00:00:00.000 Started by upstream project "autotest-per-patch" build number 132558
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.120 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.120 The recommended git tool is: git
00:00:00.120 using credential 00000000-0000-0000-0000-000000000002
00:00:00.122 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.182 Fetching changes from the remote Git repository
00:00:00.185 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.230 Using shallow fetch with depth 1
00:00:00.230 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.230 > git --version # timeout=10
00:00:00.269 > git --version # 'git version 2.39.2'
00:00:00.269 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.298 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.298 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.650 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.664 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.677 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.677 > git config core.sparsecheckout # timeout=10
00:00:07.689 > git read-tree -mu HEAD # timeout=10
00:00:07.708 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.731 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.732 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.829 [Pipeline] Start of Pipeline
00:00:07.844 [Pipeline] library
00:00:07.846 Loading library shm_lib@master
00:00:07.846 Library shm_lib@master is cached. Copying from home.
00:00:07.865 [Pipeline] node
00:00:22.867 Still waiting to schedule task
00:00:22.868 Waiting for next available executor on ‘vagrant-vm-host’
00:02:57.569 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest
00:02:57.571 [Pipeline] {
00:02:57.584 [Pipeline] catchError
00:02:57.585 [Pipeline] {
00:02:57.605 [Pipeline] wrap
00:02:57.615 [Pipeline] {
00:02:57.624 [Pipeline] stage
00:02:57.626 [Pipeline] { (Prologue)
00:02:57.652 [Pipeline] echo
00:02:57.653 Node: VM-host-SM4
00:02:57.662 [Pipeline] cleanWs
00:02:57.675 [WS-CLEANUP] Deleting project workspace...
00:02:57.675 [WS-CLEANUP] Deferred wipeout is used...
00:02:57.681 [WS-CLEANUP] done
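
The prologue above pins the jbp helper repository to a single revision via a shallow fetch. The same checkout can be replayed by hand with stock git; this is a minimal sketch assuming the Gerrit URL is reachable (the credential and proxy settings the job uses are site-specific and omitted):

    # Sketch: replay the pinned shallow checkout performed above.
    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --depth=1 origin refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # revision pinned by this build
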
00:02:57.884 [Pipeline] setCustomBuildProperty
00:02:57.983 [Pipeline] httpRequest
00:02:58.299 [Pipeline] echo
00:02:58.301 Sorcerer 10.211.164.20 is alive
00:02:58.312 [Pipeline] retry
00:02:58.315 [Pipeline] {
00:02:58.331 [Pipeline] httpRequest
00:02:58.336 HttpMethod: GET
00:02:58.337 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:02:58.338 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:02:58.338 Response Code: HTTP/1.1 200 OK
00:02:58.339 Success: Status code 200 is in the accepted range: 200,404
00:02:58.339 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:02:58.485 [Pipeline] }
00:02:58.506 [Pipeline] // retry
00:02:58.515 [Pipeline] sh
00:02:58.796 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:02:59.071 [Pipeline] httpRequest
00:02:59.374 [Pipeline] echo
00:02:59.377 Sorcerer 10.211.164.20 is alive
00:02:59.388 [Pipeline] retry
00:02:59.390 [Pipeline] {
00:02:59.407 [Pipeline] httpRequest
00:02:59.411 HttpMethod: GET
00:02:59.412 URL: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:02:59.413 Sending request to url: http://10.211.164.20/packages/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:02:59.413 Response Code: HTTP/1.1 200 OK
00:02:59.414 Success: Status code 200 is in the accepted range: 200,404
00:02:59.414 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:03:01.646 [Pipeline] }
00:03:01.664 [Pipeline] // retry
00:03:01.673 [Pipeline] sh
00:03:01.950 + tar --no-same-owner -xf spdk_2f2acf4eb25cee406c156120cee22721275ca7fd.tar.gz
00:03:05.244 [Pipeline] sh
00:03:05.524 + git -C spdk log --oneline -n5
00:03:05.524 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:03:05.524 5592070b3 doc: update nvmf_tracing.md
00:03:05.524 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:03:05.524 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:03:05.524 aa58c9e0b dif: Add spdk_dif_pi_format_get_size() to use for NVMe PRACT
00:03:05.539 [Pipeline] writeFile
00:03:05.552 [Pipeline] sh
00:03:05.831 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:05.843 [Pipeline] sh
00:03:06.123 + cat autorun-spdk.conf
00:03:06.123 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:06.123 SPDK_TEST_NVME=1
00:03:06.123 SPDK_TEST_FTL=1
00:03:06.123 SPDK_TEST_ISAL=1
00:03:06.123 SPDK_RUN_ASAN=1
00:03:06.123 SPDK_RUN_UBSAN=1
00:03:06.123 SPDK_TEST_XNVME=1
00:03:06.123 SPDK_TEST_NVME_FDP=1
00:03:06.123 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:06.129 RUN_NIGHTLY=0
00:03:06.132 [Pipeline] }
00:03:06.146 [Pipeline] // stage
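
autorun-spdk.conf, written above, is a plain shell fragment: the downstream scripts source it and branch on its flags, as the xtrace in the next stage shows. A minimal sketch of that consumption pattern (the echo bodies are illustrative, not part of the job's scripts):

    # Sketch: consume autorun-spdk.conf the way prepare_nvme.sh does below.
    source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
    if (( SPDK_TEST_FTL == 1 )); then
        echo 'provisioning an FTL backing image'      # illustrative action
    fi
    (( SPDK_TEST_NVME_FDP == 1 )) && echo 'provisioning an FDP backing image'
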
00:03:06.162 [Pipeline] stage
00:03:06.164 [Pipeline] { (Run VM)
00:03:06.177 [Pipeline] sh
00:03:06.459 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:06.459 + echo 'Start stage prepare_nvme.sh'
00:03:06.459 Start stage prepare_nvme.sh
00:03:06.459 + [[ -n 8 ]]
00:03:06.459 + disk_prefix=ex8
00:03:06.459 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:03:06.459 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:03:06.459 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:03:06.459 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:06.459 ++ SPDK_TEST_NVME=1
00:03:06.459 ++ SPDK_TEST_FTL=1
00:03:06.459 ++ SPDK_TEST_ISAL=1
00:03:06.459 ++ SPDK_RUN_ASAN=1
00:03:06.459 ++ SPDK_RUN_UBSAN=1
00:03:06.459 ++ SPDK_TEST_XNVME=1
00:03:06.459 ++ SPDK_TEST_NVME_FDP=1
00:03:06.459 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:06.459 ++ RUN_NIGHTLY=0
00:03:06.459 + cd /var/jenkins/workspace/nvme-vg-autotest
00:03:06.459 + nvme_files=()
00:03:06.459 + declare -A nvme_files
00:03:06.459 + backend_dir=/var/lib/libvirt/images/backends
00:03:06.459 + nvme_files['nvme.img']=5G
00:03:06.459 + nvme_files['nvme-cmb.img']=5G
00:03:06.459 + nvme_files['nvme-multi0.img']=4G
00:03:06.459 + nvme_files['nvme-multi1.img']=4G
00:03:06.459 + nvme_files['nvme-multi2.img']=4G
00:03:06.459 + nvme_files['nvme-openstack.img']=8G
00:03:06.459 + nvme_files['nvme-zns.img']=5G
00:03:06.459 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:06.459 + (( SPDK_TEST_FTL == 1 ))
00:03:06.459 + nvme_files["nvme-ftl.img"]=6G
00:03:06.459 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:06.459 + nvme_files["nvme-fdp.img"]=1G
00:03:06.459 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:06.459 + for nvme in "${!nvme_files[@]}"
00:03:06.459 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G
00:03:06.459 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:06.459 + for nvme in "${!nvme_files[@]}"
00:03:06.459 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G
00:03:06.459 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:03:06.459 + for nvme in "${!nvme_files[@]}"
00:03:06.459 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G
00:03:06.459 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:06.459 + for nvme in "${!nvme_files[@]}"
00:03:06.459 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G
00:03:06.459 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:06.459 + for nvme in "${!nvme_files[@]}"
00:03:06.459 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G
00:03:06.459 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:06.719 + for nvme in "${!nvme_files[@]}"
00:03:06.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G
00:03:06.719 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:06.719 + for nvme in "${!nvme_files[@]}"
00:03:06.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G
00:03:06.719 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:06.719 + for nvme in "${!nvme_files[@]}"
00:03:06.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G
00:03:06.719 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:03:06.719 + for nvme in "${!nvme_files[@]}"
00:03:06.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G
00:03:07.008 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:07.008 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu
00:03:07.008 + echo 'End stage prepare_nvme.sh'
00:03:07.008 End stage prepare_nvme.sh
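
The loop above issues one create_nvme_img.sh call per entry of the nvme_files associative array, mapping image name to size. A stand-alone sketch of the same idea, with qemu-img standing in for the SPDK helper (the falloc preallocation matches the Formatting lines above):

    # Sketch: create raw NVMe backing files from a name->size map.
    declare -A nvme_files=( ['nvme.img']=5G ['nvme-multi0.img']=4G ['nvme-fdp.img']=1G )
    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
        qemu-img create -f raw -o preallocation=falloc "$backend_dir/ex8-$nvme" "${nvme_files[$nvme]}"
    done
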
00:03:07.057 [Pipeline] sh
00:03:07.341 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:07.341 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:03:07.341
00:03:07.341 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:03:07.341 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:03:07.341 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:03:07.341 HELP=0
00:03:07.341 DRY_RUN=0
00:03:07.341 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,
00:03:07.341 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:03:07.341 NVME_AUTO_CREATE=0
00:03:07.341 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,,
00:03:07.341 NVME_CMB=,,,,
00:03:07.341 NVME_PMR=,,,,
00:03:07.341 NVME_ZNS=,,,,
00:03:07.341 NVME_MS=true,,,,
00:03:07.341 NVME_FDP=,,,on,
00:03:07.341 SPDK_VAGRANT_DISTRO=fedora39
00:03:07.341 SPDK_VAGRANT_VMCPU=10
00:03:07.341 SPDK_VAGRANT_VMRAM=12288
00:03:07.341 SPDK_VAGRANT_PROVIDER=libvirt
00:03:07.341 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:07.341 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:07.341 SPDK_OPENSTACK_NETWORK=0
00:03:07.341 VAGRANT_PACKAGE_BOX=0
00:03:07.341 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:03:07.341 FORCE_DISTRO=true
00:03:07.341 VAGRANT_BOX_VERSION=
00:03:07.341 EXTRA_VAGRANTFILES=
00:03:07.341 NIC_MODEL=e1000
00:03:07.341
00:03:07.341 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:03:07.341 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
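
Each -b argument in the Setup line above is a comma-separated disk spec. Judging from the NVME_* variables echoed with it, the positional fields appear to be: backing file, type, extra namespace files, CMB, PMR, ZNS, MS, FDP. That parse is an inference from this dump, not documented behavior:

    # Sketch: decompose one of the -b specs printed above (field names inferred).
    spec='/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on'
    IFS=',' read -r file type namespaces cmb pmr zns ms fdp <<< "$spec"
    echo "file=$file type=$type fdp=$fdp"    # -> fdp=on
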
00:03:11.526 Bringing machine 'default' up with 'libvirt' provider...
00:03:12.091 ==> default: Creating image (snapshot of base box volume).
00:03:12.349 ==> default: Creating domain with the following settings...
00:03:12.349 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732653066_94ee74e8319470ccc325
00:03:12.349 ==> default: -- Domain type: kvm
00:03:12.349 ==> default: -- Cpus: 10
00:03:12.349 ==> default: -- Feature: acpi
00:03:12.349 ==> default: -- Feature: apic
00:03:12.349 ==> default: -- Feature: pae
00:03:12.349 ==> default: -- Memory: 12288M
00:03:12.349 ==> default: -- Memory Backing: hugepages:
00:03:12.349 ==> default: -- Management MAC:
00:03:12.349 ==> default: -- Loader:
00:03:12.349 ==> default: -- Nvram:
00:03:12.349 ==> default: -- Base box: spdk/fedora39
00:03:12.349 ==> default: -- Storage pool: default
00:03:12.349 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732653066_94ee74e8319470ccc325.img (20G)
00:03:12.349 ==> default: -- Volume Cache: default
00:03:12.349 ==> default: -- Kernel:
00:03:12.349 ==> default: -- Initrd:
00:03:12.349 ==> default: -- Graphics Type: vnc
00:03:12.349 ==> default: -- Graphics Port: -1
00:03:12.349 ==> default: -- Graphics IP: 127.0.0.1
00:03:12.349 ==> default: -- Graphics Password: Not defined
00:03:12.350 ==> default: -- Video Type: cirrus
00:03:12.350 ==> default: -- Video VRAM: 9216
00:03:12.350 ==> default: -- Sound Type:
00:03:12.350 ==> default: -- Keymap: en-us
00:03:12.350 ==> default: -- TPM Path:
00:03:12.350 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:12.350 ==> default: -- Command line args:
00:03:12.350 ==> default: -> value=-device,
00:03:12.350 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:12.350 ==> default: -> value=-drive,
00:03:12.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:03:12.350 ==> default: -> value=-device,
00:03:12.350 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:03:12.350 ==> default: -> value=-device,
00:03:12.350 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:12.350 ==> default: -> value=-drive,
00:03:12.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0,
00:03:12.350 ==> default: -> value=-device,
00:03:12.350 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:12.350 ==> default: -> value=-device,
00:03:12.350 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:03:12.350 ==> default: -> value=-drive,
00:03:12.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:03:12.350 ==> default: -> value=-device,
00:03:12.350 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:12.350 ==> default: -> value=-drive,
00:03:12.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:03:12.350 ==> default: -> value=-device,
00:03:12.350 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:12.350 ==> default: -> value=-drive,
00:03:12.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:03:12.350 ==> default: -> value=-device,
00:03:12.350 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:12.350 ==> default: -> value=-device,
00:03:12.350 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:03:12.350 ==> default: -> value=-device,
00:03:12.350 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:03:12.350 ==> default: -> value=-drive,
00:03:12.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:03:12.350 ==> default: -> value=-device,
00:03:12.350 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
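
The argument dump above maps one -drive/-device pair per namespace, plus an nvme-subsys device that turns on Flexible Data Placement for the fourth controller. Stripped of the machine options vagrant-libvirt adds, the NVMe part of the resulting invocation looks like this sketch; the device arguments are copied from the dump, everything else is omitted:

    # Sketch: NVMe topology portion of the generated QEMU command line.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme,id=nvme-2,serial=12342,addr=0x12 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0 \
      -device nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
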
00:03:12.608 ==> default: Creating shared folders metadata...
00:03:12.608 ==> default: Starting domain.
00:03:14.509 ==> default: Waiting for domain to get an IP address...
00:03:29.386 ==> default: Waiting for SSH to become available...
00:03:31.288 ==> default: Configuring and enabling network interfaces...
00:03:35.543 default: SSH address: 192.168.121.167:22
00:03:35.543 default: SSH username: vagrant
00:03:35.543 default: SSH auth method: private key
00:03:38.161 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:48.131 ==> default: Mounting SSHFS shared folder...
00:03:49.504 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:49.504 ==> default: Checking Mount..
00:03:50.879 ==> default: Folder Successfully Mounted!
00:03:50.879 ==> default: Running provisioner: file...
00:03:51.813 default: ~/.gitconfig => .gitconfig
00:03:52.379
00:03:52.379 SUCCESS!
00:03:52.379
00:03:52.379 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:03:52.379 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:52.379 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:03:52.379
00:03:52.386 [Pipeline] }
00:03:52.401 [Pipeline] // stage
00:03:52.409 [Pipeline] dir
00:03:52.410 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:03:52.411 [Pipeline] {
00:03:52.424 [Pipeline] catchError
00:03:52.425 [Pipeline] {
00:03:52.438 [Pipeline] sh
00:03:52.715 + vagrant ssh-config --host vagrant
00:03:52.715 + sed -ne /^Host/,$p
00:03:52.715 + tee ssh_conf
00:03:56.907 Host vagrant
00:03:56.907 HostName 192.168.121.167
00:03:56.907 User vagrant
00:03:56.907 Port 22
00:03:56.907 UserKnownHostsFile /dev/null
00:03:56.907 StrictHostKeyChecking no
00:03:56.907 PasswordAuthentication no
00:03:56.907 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:56.907 IdentitiesOnly yes
00:03:56.907 LogLevel FATAL
00:03:56.907 ForwardAgent yes
00:03:56.907 ForwardX11 yes
00:03:56.907
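
The three piped commands above extract a plain ssh_conf from vagrant, which every later step reuses via ssh -F and scp -F instead of going through vagrant ssh. The pattern in isolation:

    # Sketch: regenerate ssh_conf and run a command in the VM over plain ssh.
    cd /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' > ssh_conf
    ssh -t -F ssh_conf vagrant@vagrant 'uname -a'
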
00:03:56.921 [Pipeline] withEnv
00:03:56.923 [Pipeline] {
00:03:56.938 [Pipeline] sh
00:03:57.213 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:57.213 source /etc/os-release
00:03:57.213 [[ -e /image.version ]] && img=$(< /image.version)
00:03:57.213 # Minimal, systemd-like check.
00:03:57.213 if [[ -e /.dockerenv ]]; then
00:03:57.213 # Clear garbage from the node's name:
00:03:57.213 # agt-er_autotest_547-896 -> autotest_547-896
00:03:57.213 # $HOSTNAME is the actual container id
00:03:57.213 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:57.213 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:57.213 # We can assume this is a mount from a host where container is running,
00:03:57.213 # so fetch its hostname to easily identify the target swarm worker.
00:03:57.213 container="$(< /etc/hostname) ($agent)"
00:03:57.213 else
00:03:57.213 # Fallback
00:03:57.213 container=$agent
00:03:57.213 fi
00:03:57.213 fi
00:03:57.213 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:57.213
00:03:57.482 [Pipeline] }
00:03:57.498 [Pipeline] // withEnv
00:03:57.507 [Pipeline] setCustomBuildProperty
00:03:57.523 [Pipeline] stage
00:03:57.526 [Pipeline] { (Tests)
00:03:57.546 [Pipeline] sh
00:03:57.827 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:58.100 [Pipeline] sh
00:03:58.383 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:58.656 [Pipeline] timeout
00:03:58.656 Timeout set to expire in 50 min
00:03:58.658 [Pipeline] {
00:03:58.673 [Pipeline] sh
00:03:58.951 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:59.517 HEAD is now at 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:03:59.530 [Pipeline] sh
00:03:59.857 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:00.130 [Pipeline] sh
00:04:00.409 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:00.684 [Pipeline] sh
00:04:00.962 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
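
Everything the VM needs is staged over that same ssh_conf: the runner script and the conf file are scp'd in, then the run is launched with the job name in the environment. Consolidated, the launch sequence above is:

    # Sketch: the copy-then-run sequence used above (host side).
    scp -F ssh_conf /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
    scp -F ssh_conf /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
    ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
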
00:04:01.221 ++ readlink -f spdk_repo
00:04:01.221 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:01.221 + [[ -n /home/vagrant/spdk_repo ]]
00:04:01.221 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:01.221 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:01.221 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:01.221 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:01.221 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:01.221 + [[ nvme-vg-autotest == pkgdep-* ]]
00:04:01.221 + cd /home/vagrant/spdk_repo
00:04:01.221 + source /etc/os-release
00:04:01.221 ++ NAME='Fedora Linux'
00:04:01.221 ++ VERSION='39 (Cloud Edition)'
00:04:01.221 ++ ID=fedora
00:04:01.221 ++ VERSION_ID=39
00:04:01.221 ++ VERSION_CODENAME=
00:04:01.221 ++ PLATFORM_ID=platform:f39
00:04:01.221 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:01.221 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:01.221 ++ LOGO=fedora-logo-icon
00:04:01.221 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:01.221 ++ HOME_URL=https://fedoraproject.org/
00:04:01.221 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:01.221 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:01.221 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:01.221 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:01.221 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:01.221 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:01.221 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:01.221 ++ SUPPORT_END=2024-11-12
00:04:01.221 ++ VARIANT='Cloud Edition'
00:04:01.221 ++ VARIANT_ID=cloud
00:04:01.221 + uname -a
00:04:01.221 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:01.221 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:01.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:02.047 Hugepages
00:04:02.048 node hugesize free / total
00:04:02.048 node0 1048576kB 0 / 0
00:04:02.048 node0 2048kB 0 / 0
00:04:02.048
00:04:02.048 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:02.048 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:02.048 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:02.048 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:02.048 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:04:02.048 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:04:02.048 + rm -f /tmp/spdk-ld-path
00:04:02.048 + source autorun-spdk.conf
00:04:02.048 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:02.048 ++ SPDK_TEST_NVME=1
00:04:02.048 ++ SPDK_TEST_FTL=1
00:04:02.048 ++ SPDK_TEST_ISAL=1
00:04:02.048 ++ SPDK_RUN_ASAN=1
00:04:02.048 ++ SPDK_RUN_UBSAN=1
00:04:02.048 ++ SPDK_TEST_XNVME=1
00:04:02.048 ++ SPDK_TEST_NVME_FDP=1
00:04:02.048 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:02.048 ++ RUN_NIGHTLY=0
00:04:02.048 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:02.048 + [[ -n '' ]]
00:04:02.048 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:02.048 + for M in /var/spdk/build-*-manifest.txt
00:04:02.048 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:02.048 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:02.048 + for M in /var/spdk/build-*-manifest.txt
00:04:02.048 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:02.048 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:02.048 + for M in /var/spdk/build-*-manifest.txt
00:04:02.048 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:02.048 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:02.048 ++ uname
00:04:02.048 + [[ Linux == \L\i\n\u\x ]]
00:04:02.048 + sudo dmesg -T
00:04:02.048 + sudo dmesg --clear
00:04:02.048 + dmesg_pid=5308
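
The dmesg calls above set up kernel-log capture for the run: dump what is already in the ring buffer, clear it, then follow new messages for the duration. A sketch of that sequence; grabbing the watcher PID with $! is an assumption, the log only shows dmesg_pid being assigned:

    # Sketch: kernel-log capture around a test run.
    sudo dmesg -T          # dump existing messages, human-readable timestamps
    sudo dmesg --clear     # empty the ring buffer
    sudo dmesg -Tw &       # follow new messages in the background
    dmesg_pid=$!           # assumption: PID kept so the watcher can be killed later
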
00:04:02.048 + [[ Fedora Linux == FreeBSD ]]
00:04:02.048 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:02.048 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:02.048 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:02.048 + [[ -x /usr/src/fio-static/fio ]]
00:04:02.048 + sudo dmesg -Tw
00:04:02.048 + export FIO_BIN=/usr/src/fio-static/fio
00:04:02.048 + FIO_BIN=/usr/src/fio-static/fio
00:04:02.048 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:02.048 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:02.048 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:02.048 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:02.048 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:02.048 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:02.048 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:02.048 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:02.048 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:02.307 20:31:57 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:02.307 20:31:57 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:02.307 20:31:57 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:02.307 20:31:57 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:04:02.307 20:31:57 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:04:02.307 20:31:57 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:04:02.307 20:31:57 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:04:02.307 20:31:57 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:04:02.307 20:31:57 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:04:02.308 20:31:57 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:04:02.308 20:31:57 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:02.308 20:31:57 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:04:02.308 20:31:57 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:02.308 20:31:57 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:02.308 20:31:57 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:02.308 20:31:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:02.308 20:31:57 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:02.308 20:31:57 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:02.308 20:31:57 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:02.308 20:31:57 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:02.308 20:31:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:02.308 20:31:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:02.308 20:31:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:02.308 20:31:57 -- paths/export.sh@5 -- $ export PATH
00:04:02.308 20:31:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
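
Note how each paths/export.sh line above re-prepends the toolchain directories, so the echoed PATH carries every prefix two or three times. Harmless, but easy to clean up; this helper is purely illustrative and not part of export.sh:

    # Illustration only: drop duplicate PATH entries, keeping first-seen order.
    dedupe_path() {
        local seen="" p
        while IFS= read -r -d: p; do
            case ":$seen:" in
                *":$p:"*) ;;                      # already present, skip
                *) seen="${seen:+$seen:}$p" ;;
            esac
        done <<< "$PATH:"                         # trailing : so the last entry is read
        printf '%s\n' "$seen"
    }
    PATH=$(dedupe_path)
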
00:04:02.308 20:31:57 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:04:02.308 20:31:57 -- common/autobuild_common.sh@493 -- $ date +%s
00:04:02.308 20:31:57 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732653117.XXXXXX
00:04:02.308 20:31:57 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732653117.aMmHnB
00:04:02.308 20:31:57 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:04:02.308 20:31:57 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:04:02.308 20:31:57 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:04:02.308 20:31:57 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:04:02.308 20:31:57 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:04:02.308 20:31:57 -- common/autobuild_common.sh@509 -- $ get_config_params
00:04:02.308 20:31:57 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:02.308 20:31:57 -- common/autotest_common.sh@10 -- $ set +x
00:04:02.308 20:31:57 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:04:02.308 20:31:57 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:02.308 20:31:57 -- pm/common@17 -- $ local monitor
00:04:02.308 20:31:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:02.308 20:31:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:02.308 20:31:57 -- pm/common@25 -- $ sleep 1
00:04:02.308 20:31:57 -- pm/common@21 -- $ date +%s
00:04:02.308 20:31:57 -- pm/common@21 -- $ date +%s
00:04:02.308 20:31:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732653117
00:04:02.308 20:31:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732653117
00:04:02.308 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732653117_collect-cpu-load.pm.log
00:04:02.308 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732653117_collect-vmstat.pm.log
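
start_monitor_resources above launches one collector per entry in MONITOR_RESOURCES and stamps their logs with a shared epoch suffix. Reproduced in isolation (backgrounding with & is an assumption; the log shows only the commands and their Redirecting notices):

    # Sketch: launch the resource monitors the way pm/common does above.
    out=/home/vagrant/spdk_repo/spdk/../output
    suffix=monitor.autobuild.sh.$(date +%s)
    /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d "$out/power" -l -p "$suffix" &
    /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d "$out/power" -l -p "$suffix" &
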
00:04:03.243 20:31:58 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:03.243 20:31:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:03.243 20:31:58 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:03.243 20:31:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:03.243 20:31:58 -- spdk/autobuild.sh@16 -- $ date -u
00:04:03.243 Tue Nov 26 08:31:58 PM UTC 2024
00:04:03.502 20:31:58 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:03.502 v25.01-pre-271-g2f2acf4eb
00:04:03.502 20:31:58 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:03.502 20:31:58 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:03.502 20:31:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:03.502 20:31:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:03.502 20:31:58 -- common/autotest_common.sh@10 -- $ set +x
00:04:03.502 ************************************
00:04:03.502 START TEST asan
00:04:03.502 ************************************
00:04:03.502 using asan
00:04:03.502 20:31:58 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:04:03.502
00:04:03.502 real 0m0.000s
00:04:03.502 user 0m0.000s
00:04:03.502 sys 0m0.000s
00:04:03.502 20:31:58 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:03.502 ************************************
00:04:03.502 END TEST asan
00:04:03.502 ************************************
00:04:03.502 20:31:58 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:03.502 20:31:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:03.502 20:31:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:03.502 20:31:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:03.502 20:31:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:03.502 20:31:58 -- common/autotest_common.sh@10 -- $ set +x
00:04:03.502 ************************************
00:04:03.502 START TEST ubsan
00:04:03.502 ************************************
00:04:03.502 using ubsan
00:04:03.502 20:31:58 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:03.502
00:04:03.502 real 0m0.000s
00:04:03.502 user 0m0.000s
00:04:03.502 sys 0m0.000s
00:04:03.502 20:31:58 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:03.502 ************************************
00:04:03.502 END TEST ubsan
00:04:03.502 20:31:58 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:03.502 ************************************
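
run_test, whose output frames the asan/ubsan checks above, is just a banner-and-timing wrapper around an arbitrary command. A stand-in with the same observable shape (the real helper lives in autotest_common.sh and does more bookkeeping):

    # Sketch: minimal run_test-style wrapper matching the banners above.
    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # prints the real/user/sys lines seen above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test_sketch ubsan echo 'using ubsan'
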
00:04:03.502 20:31:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:03.502 20:31:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:03.502 20:31:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:03.502 20:31:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:03.502 20:31:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:03.502 20:31:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:03.502 20:31:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:03.502 20:31:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:03.502 20:31:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:04:03.502 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:03.502 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:04.068 Using 'verbs' RDMA provider
00:04:20.312 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:35.181 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:35.181 Creating mk/config.mk...done.
00:04:35.181 Creating mk/cc.flags.mk...done.
00:04:35.181 Type 'make' to build.
00:04:35.181 20:32:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:35.181 20:32:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:35.181 20:32:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:35.181 20:32:29 -- common/autotest_common.sh@10 -- $ set +x
00:04:35.181 ************************************
00:04:35.181 START TEST make
00:04:35.181 ************************************
00:04:35.181 20:32:29 make -- common/autotest_common.sh@1129 -- $ make -j10
00:04:35.181 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:04:35.181 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:04:35.181 meson setup builddir \
00:04:35.181 -Dwith-libaio=enabled \
00:04:35.181 -Dwith-liburing=enabled \
00:04:35.181 -Dwith-libvfn=disabled \
00:04:35.181 -Dwith-spdk=disabled \
00:04:35.181 -Dexamples=false \
00:04:35.181 -Dtests=false \
00:04:35.181 -Dtools=false && \
00:04:35.181 meson compile -C builddir && \
00:04:35.181 cd -)
00:04:35.181 make[1]: Nothing to be done for 'all'.
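
The configure invocation above (autobuild.sh line 67) fixes the feature set for the whole run; the same build can be reproduced outside the CI with the flags copied verbatim from the log:

    # Sketch: reproduce this job's SPDK build configuration by hand.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage \
        --with-ublk --with-xnvme --with-shared
    make -j10
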
00:04:37.710 The Meson build system
00:04:37.710 Version: 1.5.0
00:04:37.710 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:04:37.710 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:37.710 Build type: native build
00:04:37.710 Project name: xnvme
00:04:37.710 Project version: 0.7.5
00:04:37.710 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:37.710 C linker for the host machine: cc ld.bfd 2.40-14
00:04:37.710 Host machine cpu family: x86_64
00:04:37.710 Host machine cpu: x86_64
00:04:37.710 Message: host_machine.system: linux
00:04:37.710 Compiler for C supports arguments -Wno-missing-braces: YES
00:04:37.710 Compiler for C supports arguments -Wno-cast-function-type: YES
00:04:37.710 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:04:37.710 Run-time dependency threads found: YES
00:04:37.710 Has header "setupapi.h" : NO
00:04:37.710 Has header "linux/blkzoned.h" : YES
00:04:37.710 Has header "linux/blkzoned.h" : YES (cached)
00:04:37.710 Has header "libaio.h" : YES
00:04:37.710 Library aio found: YES
00:04:37.710 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:37.710 Run-time dependency liburing found: YES 2.2
00:04:37.710 Dependency libvfn skipped: feature with-libvfn disabled
00:04:37.710 Found CMake: /usr/bin/cmake (3.27.7)
00:04:37.710 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:04:37.710 Subproject spdk : skipped: feature with-spdk disabled
00:04:37.710 Run-time dependency appleframeworks found: NO (tried framework)
00:04:37.710 Run-time dependency appleframeworks found: NO (tried framework)
00:04:37.710 Library rt found: YES
00:04:37.710 Checking for function "clock_gettime" with dependency -lrt: YES
00:04:37.710 Configuring xnvme_config.h using configuration
00:04:37.710 Configuring xnvme.spec using configuration
00:04:37.710 Run-time dependency bash-completion found: YES 2.11
00:04:37.710 Message: Bash-completions: /usr/share/bash-completion/completions
00:04:37.710 Program cp found: YES (/usr/bin/cp)
00:04:37.710 Build targets in project: 3
00:04:37.710
00:04:37.710 xnvme 0.7.5
00:04:37.710
00:04:37.710 Subprojects
00:04:37.710 spdk : NO Feature 'with-spdk' disabled
00:04:37.710
00:04:37.710 User defined options
00:04:37.710 examples : false
00:04:37.710 tests : false
00:04:37.710 tools : false
00:04:37.710 with-libaio : enabled
00:04:37.710 with-liburing: enabled
00:04:37.710 with-libvfn : disabled
00:04:37.710 with-spdk : disabled
00:04:37.710
00:04:37.710 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:38.277 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:04:38.535 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:04:38.535 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:04:38.535 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:04:38.535 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:04:38.535 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:04:38.535 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:04:38.535 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:04:38.535 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:04:38.535 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:04:38.535 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:04:38.535 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:04:38.535 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:04:38.793 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:04:38.793 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:04:38.793 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:04:38.793 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:04:38.793 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:04:38.793 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:04:38.793 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:04:38.793 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:04:38.793 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:04:38.793 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:04:38.793 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:04:38.793 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:04:38.793 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:04:38.793 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:04:38.793 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:04:38.793 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:04:38.793 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:04:38.793 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:04:39.052 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:04:39.052 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:04:39.052 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:04:39.052 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:04:39.052 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:04:39.052 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:04:39.052 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:04:39.052 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:04:39.052 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:04:39.052 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:04:39.052 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:04:39.052 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:04:39.052 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:04:39.052 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:04:39.052 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:04:39.052 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:04:39.052 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:04:39.052 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:04:39.052 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:04:39.052 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:04:39.052 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:04:39.052 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:04:39.052 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:04:39.311 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:04:39.311 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:04:39.311 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:04:39.311 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:04:39.311 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:04:39.311 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:04:39.311 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:04:39.311 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:04:39.311 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:04:39.311 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:04:39.311 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:04:39.311 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:04:39.311 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:04:39.311 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:04:39.569 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:04:39.569 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:04:39.569 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:04:39.569 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:04:39.569 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:04:39.569 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:04:40.136 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:04:40.136 [75/76] Linking static target lib/libxnvme.a
00:04:40.136 [76/76] Linking target lib/libxnvme.so.0.7.5
00:04:40.136 INFO: autodetecting backend as ninja
00:04:40.136 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:40.136 /home/vagrant/spdk_repo/spdk/xnvmebuild
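
With [76/76] linked, the xnvme subproject is done and make moves on to xnvmebuild. The subproject can also be rebuilt on its own with the commands the job echoed earlier, without rerunning the full SPDK make:

    # Sketch: rebuild just the xnvme subproject.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson compile -C builddir      # equivalently: /usr/local/bin/ninja -C builddir
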
00:04:50.106 The Meson build system
00:04:50.106 Version: 1.5.0
00:04:50.106 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:04:50.106 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:04:50.106 Build type: native build
00:04:50.106 Program cat found: YES (/usr/bin/cat)
00:04:50.106 Project name: DPDK
00:04:50.106 Project version: 24.03.0
00:04:50.106 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:50.106 C linker for the host machine: cc ld.bfd 2.40-14
00:04:50.106 Host machine cpu family: x86_64
00:04:50.106 Host machine cpu: x86_64
00:04:50.106 Message: ## Building in Developer Mode ##
00:04:50.106 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:50.106 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:04:50.106 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:50.106 Program python3 found: YES (/usr/bin/python3)
00:04:50.106 Program cat found: YES (/usr/bin/cat)
00:04:50.106 Compiler for C supports arguments -march=native: YES
00:04:50.106 Checking for size of "void *" : 8
00:04:50.106 Checking for size of "void *" : 8 (cached)
00:04:50.106 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:04:50.106 Library m found: YES
00:04:50.106 Library numa found: YES
00:04:50.106 Has header "numaif.h" : YES
00:04:50.106 Library fdt found: NO
00:04:50.106 Library execinfo found: NO
00:04:50.106 Has header "execinfo.h" : YES
00:04:50.106 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:50.106 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:50.106 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:50.106 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:50.106 Run-time dependency openssl found: YES 3.1.1
00:04:50.106 Run-time dependency libpcap found: YES 1.10.4
00:04:50.106 Has header "pcap.h" with dependency libpcap: YES
00:04:50.106 Compiler for C supports arguments -Wcast-qual: YES
00:04:50.106 Compiler for C supports arguments -Wdeprecated: YES
00:04:50.106 Compiler for C supports arguments -Wformat: YES
00:04:50.106 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:50.106 Compiler for C supports arguments -Wformat-security: NO
00:04:50.106 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:50.106 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:50.106 Compiler for C supports arguments -Wnested-externs: YES
00:04:50.106 Compiler for C supports arguments -Wold-style-definition: YES
00:04:50.106 Compiler for C supports arguments -Wpointer-arith: YES
00:04:50.106 Compiler for C supports arguments -Wsign-compare: YES
00:04:50.106 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:50.106 Compiler for C supports arguments -Wundef: YES
00:04:50.106 Compiler for C supports arguments -Wwrite-strings: YES
00:04:50.106 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:50.106 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:50.106 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:50.106 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:50.106 Program objdump found: YES (/usr/bin/objdump)
00:04:50.106 Compiler for C supports arguments -mavx512f: YES
00:04:50.106 Checking if "AVX512 checking" compiles: YES
00:04:50.106 Fetching value of define "__SSE4_2__" : 1
00:04:50.106 Fetching value of define "__AES__" : 1
00:04:50.106 Fetching value of define "__AVX__" : 1
00:04:50.106 Fetching value of define "__AVX2__" : 1
00:04:50.106 Fetching value of define "__AVX512BW__" : 1
00:04:50.106 Fetching value of define "__AVX512CD__" : 1
00:04:50.106 Fetching value of define "__AVX512DQ__" : 1
00:04:50.106 Fetching value of define "__AVX512F__" : 1
00:04:50.106 Fetching value of define "__AVX512VL__" : 1
00:04:50.106 Fetching value of define "__PCLMUL__" : 1
00:04:50.106 Fetching value of define "__RDRND__" : 1
00:04:50.106 Fetching value of define "__RDSEED__" : 1
00:04:50.106 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:50.106 Fetching value of define "__znver1__" : (undefined)
00:04:50.106 Fetching value of define "__znver2__" : (undefined)
00:04:50.106 Fetching value of define "__znver3__" : (undefined)
00:04:50.106 Fetching value of define "__znver4__" : (undefined)
00:04:50.106 Library asan found: YES
00:04:50.106 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:50.106 Message: lib/log: Defining dependency "log"
00:04:50.106 Message: lib/kvargs: Defining dependency "kvargs"
00:04:50.106 Message: lib/telemetry: Defining dependency "telemetry"
00:04:50.106 Library rt found: YES
00:04:50.106 Checking for function "getentropy" : NO
00:04:50.106 Message: lib/eal: Defining dependency "eal"
00:04:50.106 Message: lib/ring: Defining dependency "ring"
00:04:50.106 Message: lib/rcu: Defining dependency "rcu"
00:04:50.106 Message: lib/mempool: Defining dependency "mempool"
00:04:50.106 Message: lib/mbuf: Defining dependency "mbuf"
00:04:50.106 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:50.106 Fetching value of define "__AVX512F__" : 1 (cached)
00:04:50.106 Fetching value of define "__AVX512BW__" : 1 (cached)
00:04:50.106 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:04:50.106 Fetching value of define "__AVX512VL__" : 1 (cached)
00:04:50.106 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:04:50.106 Compiler for C supports arguments -mpclmul: YES
00:04:50.106 Compiler for C supports arguments -maes: YES
00:04:50.106 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:50.106 Compiler for C supports arguments -mavx512bw: YES
00:04:50.106 Compiler for C supports arguments -mavx512dq: YES
00:04:50.106 Compiler for C supports arguments -mavx512vl: YES
00:04:50.106 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:50.106 Compiler for C supports arguments -mavx2: YES
00:04:50.106 Compiler for C supports arguments -mavx: YES
00:04:50.106 Message: lib/net: Defining dependency "net"
00:04:50.106 Message: lib/meter: Defining dependency "meter"
00:04:50.106 Message: lib/ethdev: Defining dependency "ethdev"
00:04:50.106 Message: lib/pci: Defining dependency "pci"
00:04:50.106 Message: lib/cmdline: Defining dependency "cmdline"
00:04:50.106 Message: lib/hash: Defining dependency "hash"
00:04:50.106 Message: lib/timer: Defining dependency "timer"
00:04:50.106 Message: lib/compressdev: Defining dependency "compressdev"
00:04:50.107 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:50.107 Message: lib/dmadev: Defining dependency "dmadev"
00:04:50.107 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:50.107 Message: lib/power: Defining dependency "power"
00:04:50.107 Message: lib/reorder: Defining dependency "reorder"
00:04:50.107 Message: lib/security: Defining dependency "security"
00:04:50.107 Has header "linux/userfaultfd.h" : YES
00:04:50.107 Has header "linux/vduse.h" : YES
00:04:50.107 Message: lib/vhost: Defining dependency "vhost"
00:04:50.107 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:50.107 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:50.107 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:50.107 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:50.107 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:50.107 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:50.107 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:50.107 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:50.107 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:50.107 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:50.107 Program doxygen found: YES (/usr/local/bin/doxygen)
00:04:50.107 Configuring doxy-api-html.conf using configuration
00:04:50.107 Configuring doxy-api-man.conf using configuration
00:04:50.107 Program mandb found: YES (/usr/bin/mandb)
00:04:50.107 Program sphinx-build found: NO
00:04:50.107 Configuring rte_build_config.h using configuration
00:04:50.107 Message:
00:04:50.107 =================
00:04:50.107 Applications Enabled
00:04:50.107 =================
00:04:50.107
00:04:50.107 apps:
00:04:50.107
00:04:50.107
00:04:50.107 Message:
00:04:50.107 =================
00:04:50.107 Libraries Enabled
00:04:50.107 =================
00:04:50.107
00:04:50.107 libs:
00:04:50.107 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:50.107 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:50.107 cryptodev, dmadev, power, reorder, security, vhost,
00:04:50.107
00:04:50.107 Message:
00:04:50.107 ===============
00:04:50.107 Drivers Enabled
00:04:50.107 ===============
00:04:50.107
00:04:50.107 common:
00:04:50.107
00:04:50.107 bus:
00:04:50.107 pci, vdev,
00:04:50.107 mempool:
00:04:50.107 ring,
00:04:50.107 dma:
00:04:50.107
00:04:50.107 net:
00:04:50.107
00:04:50.107 crypto:
00:04:50.107
00:04:50.107 compress:
00:04:50.107
00:04:50.107 vdpa:
00:04:50.107
00:04:50.107
00:04:50.107 Message:
00:04:50.107 =================
00:04:50.107 Content Skipped
00:04:50.107 =================
00:04:50.107
00:04:50.107 apps:
00:04:50.107 dumpcap: explicitly disabled via build config
00:04:50.107 graph: explicitly disabled via build config
00:04:50.107 pdump: explicitly disabled via build config
00:04:50.107 proc-info: explicitly disabled via build config
00:04:50.107 test-acl: explicitly disabled via build config
00:04:50.107 test-bbdev: explicitly disabled via build config
00:04:50.107 test-cmdline: explicitly disabled via build config
00:04:50.107 test-compress-perf: explicitly disabled via build config
00:04:50.107 test-crypto-perf: explicitly disabled via build config
00:04:50.107 test-dma-perf: explicitly disabled via build config
00:04:50.107 test-eventdev: explicitly disabled via build config
00:04:50.107 test-fib: explicitly disabled via build config
00:04:50.107 test-flow-perf: explicitly disabled via build config
00:04:50.107 test-gpudev: explicitly disabled via build config
00:04:50.107 test-mldev: explicitly disabled via build config
00:04:50.107 test-pipeline: explicitly disabled via build config
00:04:50.107 test-pmd: explicitly disabled via build config
00:04:50.107 test-regex: explicitly disabled via build config
00:04:50.107 test-sad: explicitly disabled via build config
00:04:50.107 test-security-perf: explicitly disabled via build config
00:04:50.107
00:04:50.107 libs:
00:04:50.107 argparse: explicitly disabled via build config
00:04:50.107 metrics: explicitly disabled via build config
00:04:50.107 acl: explicitly disabled via build config
00:04:50.107 bbdev: explicitly disabled via build config
00:04:50.107 bitratestats: explicitly disabled via build config
00:04:50.107 bpf: explicitly disabled via build config
00:04:50.107 cfgfile: explicitly disabled via build config
00:04:50.107 distributor: explicitly disabled via build config
00:04:50.107 efd: explicitly disabled via build config
00:04:50.107 eventdev: explicitly disabled via build config
00:04:50.107 dispatcher: explicitly disabled via build config
00:04:50.107 gpudev: explicitly disabled via build config
00:04:50.107 gro: explicitly disabled via build config
00:04:50.107 gso: explicitly disabled via build config
00:04:50.107 ip_frag: explicitly disabled via build config
00:04:50.107 jobstats: explicitly disabled via build config
00:04:50.107 latencystats: explicitly disabled via build config
00:04:50.107 lpm: explicitly disabled via build config
00:04:50.107 member: explicitly disabled via build config
00:04:50.107 pcapng: explicitly disabled via build config
00:04:50.107 rawdev: explicitly disabled via build config
00:04:50.107 regexdev: explicitly disabled via build config
00:04:50.107 mldev: explicitly disabled via build config
00:04:50.107 rib: explicitly disabled via build config
00:04:50.107 sched: explicitly disabled via build config
00:04:50.107 stack: explicitly disabled via build config
00:04:50.107 ipsec: explicitly disabled via build config
00:04:50.107 pdcp: explicitly disabled via build config
00:04:50.107 fib: explicitly disabled via build config
00:04:50.107 port: explicitly disabled via build config
00:04:50.107 pdump: explicitly disabled via build config
00:04:50.107 table: explicitly disabled via build config
00:04:50.107 pipeline: explicitly disabled via build config
00:04:50.107 graph: explicitly disabled via build config
00:04:50.107 node: explicitly disabled via build config
00:04:50.107
00:04:50.107 drivers:
00:04:50.107 common/cpt: not in enabled drivers build config
00:04:50.107 common/dpaax: not in enabled drivers build config
00:04:50.107 common/iavf: not in enabled drivers build config
00:04:50.107 common/idpf: not in enabled drivers build config
00:04:50.107 common/ionic: not in enabled drivers build config
00:04:50.107 common/mvep: not in enabled drivers build config
00:04:50.107 common/octeontx: not in enabled drivers build config
00:04:50.107 bus/auxiliary: not in enabled drivers build config
00:04:50.107 bus/cdx: not in enabled drivers build config
00:04:50.107 bus/dpaa: not in enabled drivers build config
00:04:50.107 bus/fslmc: not in enabled drivers build config
00:04:50.107 bus/ifpga: not in enabled drivers build config
00:04:50.107 bus/platform: not in enabled drivers build config
00:04:50.107 bus/uacce: not in enabled drivers build config
00:04:50.107 bus/vmbus: not in enabled drivers build config
00:04:50.107 common/cnxk: not in enabled drivers build config
00:04:50.107 common/mlx5: not in enabled drivers build config
00:04:50.107 common/nfp: not in enabled drivers build config
00:04:50.107 common/nitrox: not in enabled drivers build config
00:04:50.107 common/qat: not in enabled drivers build config
00:04:50.107 common/sfc_efx: not in enabled drivers build config
00:04:50.107 mempool/bucket: not in enabled drivers build config
00:04:50.107 mempool/cnxk: not in enabled drivers build config
00:04:50.107 mempool/dpaa: not in enabled drivers build config
00:04:50.107 mempool/dpaa2: not in enabled drivers build config
00:04:50.107 mempool/octeontx: not in enabled drivers build config
00:04:50.107 mempool/stack: not in enabled drivers build config
00:04:50.107 dma/cnxk: not in enabled drivers build config
00:04:50.107 dma/dpaa: not in enabled drivers build config
00:04:50.107 dma/dpaa2: not in enabled drivers build config
00:04:50.107 dma/hisilicon: not in enabled drivers build config
00:04:50.107 dma/idxd: not in enabled drivers build config
00:04:50.107 dma/ioat: not in enabled drivers build config
00:04:50.107 dma/skeleton: not in enabled drivers build config
00:04:50.107 net/af_packet: not in enabled drivers build config
00:04:50.107 net/af_xdp: not in enabled drivers build config
00:04:50.107 net/ark: not in enabled drivers build config
00:04:50.107 net/atlantic: not in enabled drivers build config
00:04:50.107 net/avp: not in enabled drivers build config
00:04:50.107 net/axgbe: not in enabled drivers build config
00:04:50.107 net/bnx2x: not in enabled drivers build config
00:04:50.107 net/bnxt: not in enabled drivers build config
00:04:50.107 net/bonding: not in enabled drivers build config
00:04:50.107 net/cnxk: not in enabled drivers build config
00:04:50.107 net/cpfl: not in enabled drivers build config
00:04:50.107 net/cxgbe: not in enabled drivers build config
00:04:50.107 net/dpaa: not in enabled drivers build config
00:04:50.107 net/dpaa2: not in enabled drivers build config
00:04:50.107 net/e1000: not in enabled drivers build config
00:04:50.107 net/ena: not in enabled drivers build config
00:04:50.107 net/enetc: not in enabled drivers build config
00:04:50.107 net/enetfec: not in enabled drivers build config
00:04:50.107 net/enic: not in enabled drivers build config
00:04:50.107 net/failsafe: not in enabled drivers build config
00:04:50.107 net/fm10k: not in enabled drivers build config
00:04:50.107 net/gve: not in enabled drivers build config
00:04:50.107 net/hinic: not in enabled drivers build config
00:04:50.107 net/hns3: not in enabled drivers build config
00:04:50.107 net/i40e: not in enabled drivers build config
00:04:50.107 net/iavf: not in enabled drivers build config
00:04:50.107 net/ice: not in enabled drivers build config
00:04:50.107 net/idpf: not in enabled drivers build config
00:04:50.107 net/igc: not in enabled drivers build config
00:04:50.107 net/ionic: not in enabled drivers build config
00:04:50.107 net/ipn3ke: not in enabled drivers build config
00:04:50.107 net/ixgbe: not in enabled drivers build config
00:04:50.107 net/mana: not in enabled drivers build config
00:04:50.107 net/memif: not in enabled drivers build config
00:04:50.107 net/mlx4: not in enabled drivers build config
00:04:50.107 net/mlx5: not in enabled drivers build config
00:04:50.107 net/mvneta: not in enabled drivers build config
00:04:50.108 net/mvpp2: not in enabled drivers build config
00:04:50.108 net/netvsc: not in enabled drivers build config
00:04:50.108 net/nfb: not in enabled drivers build config
00:04:50.108 net/nfp: not in enabled drivers build config
00:04:50.108 net/ngbe: not in enabled drivers build config
00:04:50.108 net/null: not in enabled drivers build config
00:04:50.108 net/octeontx: not in enabled drivers build config
00:04:50.108 net/octeon_ep: not in enabled drivers build config
00:04:50.108 net/pcap: not in enabled drivers build config
00:04:50.108 net/pfe: not in enabled drivers build config
00:04:50.108 net/qede: not in enabled drivers build config
00:04:50.108 net/ring: not in enabled drivers build config
00:04:50.108 net/sfc: not in enabled drivers build config
00:04:50.108 net/softnic: not in enabled drivers build config
00:04:50.108 net/tap: not in enabled drivers build config
00:04:50.108 net/thunderx: not in enabled drivers build config
00:04:50.108 net/txgbe: not in enabled drivers build config
00:04:50.108 net/vdev_netvsc: not in enabled drivers build config
00:04:50.108 net/vhost: not in enabled drivers build config
00:04:50.108 net/virtio: not in enabled drivers build config
00:04:50.108 net/vmxnet3: not in enabled drivers build config
00:04:50.108 raw/*: missing internal dependency, "rawdev"
00:04:50.108 crypto/armv8: not in enabled drivers build config
00:04:50.108 crypto/bcmfs: not in enabled drivers build config
00:04:50.108 crypto/caam_jr: not in enabled drivers build config
00:04:50.108 crypto/ccp: not in enabled drivers build config
00:04:50.108 crypto/cnxk: not in enabled drivers build config
00:04:50.108 crypto/dpaa_sec: not in enabled drivers build config
00:04:50.108 crypto/dpaa2_sec: not in enabled drivers build config
00:04:50.108 crypto/ipsec_mb: not in enabled drivers build config
00:04:50.108 crypto/mlx5: not in enabled drivers build config
00:04:50.108 crypto/mvsam: not in enabled drivers build config
00:04:50.108 crypto/nitrox: not in enabled drivers build config 00:04:50.108 crypto/null: not in enabled drivers build config 00:04:50.108 crypto/octeontx: not in enabled drivers build config 00:04:50.108 crypto/openssl: not in enabled drivers build config 00:04:50.108 crypto/scheduler: not in enabled drivers build config 00:04:50.108 crypto/uadk: not in enabled drivers build config 00:04:50.108 crypto/virtio: not in enabled drivers build config 00:04:50.108 compress/isal: not in enabled drivers build config 00:04:50.108 compress/mlx5: not in enabled drivers build config 00:04:50.108 compress/nitrox: not in enabled drivers build config 00:04:50.108 compress/octeontx: not in enabled drivers build config 00:04:50.108 compress/zlib: not in enabled drivers build config 00:04:50.108 regex/*: missing internal dependency, "regexdev" 00:04:50.108 ml/*: missing internal dependency, "mldev" 00:04:50.108 vdpa/ifc: not in enabled drivers build config 00:04:50.108 vdpa/mlx5: not in enabled drivers build config 00:04:50.108 vdpa/nfp: not in enabled drivers build config 00:04:50.108 vdpa/sfc: not in enabled drivers build config 00:04:50.108 event/*: missing internal dependency, "eventdev" 00:04:50.108 baseband/*: missing internal dependency, "bbdev" 00:04:50.108 gpu/*: missing internal dependency, "gpudev" 00:04:50.108 00:04:50.108 00:04:50.108 Build targets in project: 85 00:04:50.108 00:04:50.108 DPDK 24.03.0 00:04:50.108 00:04:50.108 User defined options 00:04:50.108 buildtype : debug 00:04:50.108 default_library : shared 00:04:50.108 libdir : lib 00:04:50.108 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:50.108 b_sanitize : address 00:04:50.108 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:50.108 c_link_args : 00:04:50.108 cpu_instruction_set: native 00:04:50.108 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:50.108 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:50.108 enable_docs : false 00:04:50.108 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:50.108 enable_kmods : false 00:04:50.108 max_lcores : 128 00:04:50.108 tests : false 00:04:50.108 00:04:50.108 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:50.108 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:50.108 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:50.108 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:50.108 [3/268] Linking static target lib/librte_kvargs.a 00:04:50.108 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:50.108 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:50.108 [6/268] Linking static target lib/librte_log.a 00:04:50.108 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:50.367 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:50.367 [9/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:50.367 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:50.367 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:50.368 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:50.368 [13/268] Linking static target lib/librte_telemetry.a 00:04:50.368 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:50.368 [15/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.626 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:50.626 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:50.626 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:50.885 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:50.885 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:51.144 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.144 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:51.144 [23/268] Linking target lib/librte_log.so.24.1 00:04:51.144 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:51.144 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:51.401 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:51.401 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.401 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:51.401 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:51.401 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:51.659 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:51.659 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:51.659 [33/268] Linking target lib/librte_kvargs.so.24.1 00:04:51.659 [34/268] Linking target lib/librte_telemetry.so.24.1 00:04:51.659 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:51.918 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:51.918 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:51.918 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:51.918 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:51.918 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:51.918 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:51.918 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:51.918 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:51.918 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:52.224 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:52.224 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:52.499 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:52.499 
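The "User defined options" block printed by meson above maps directly onto DPDK's documented meson options, all of which appear by name in that summary. A minimal sketch of an equivalent manual configure step, assuming a DPDK checkout and reusing the comma-separated disable_apps/disable_libs lists exactly as printed in the summary (in CI this invocation is driven by SPDK's build scripts, so treat it as illustrative only):

  meson setup build-tmp \
    -Dbuildtype=debug -Ddefault_library=shared \
    -Db_sanitize=address -Dc_args='-fPIC -Werror' \
    -Dcpu_instruction_set=native -Dmax_lcores=128 \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Dtests=false
  ninja -C build-tmp -j 10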
[48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:52.499 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:52.499 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:52.758 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:52.758 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:52.758 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:53.016 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:53.016 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:53.016 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:53.016 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:53.016 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:53.274 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:53.274 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:53.274 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:53.274 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:53.532 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:53.532 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:53.532 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:53.790 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:53.790 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:53.790 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:54.049 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:54.049 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:54.049 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:54.324 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:54.324 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:54.324 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:54.324 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:54.324 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:54.324 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:54.324 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:54.324 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:54.583 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:54.583 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:54.583 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:54.583 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:54.583 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:54.843 [85/268] Linking static target lib/librte_eal.a 00:04:54.843 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:54.843 [87/268] Linking static target lib/librte_ring.a 00:04:55.101 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:55.101 [89/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:55.101 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:55.101 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:55.101 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:55.359 [93/268] Linking static target lib/librte_mempool.a 00:04:55.359 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:55.679 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:55.679 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:55.679 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:55.679 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:55.679 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:55.679 [100/268] Linking static target lib/librte_rcu.a 00:04:55.938 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:55.938 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:55.938 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:55.938 [104/268] Linking static target lib/librte_mbuf.a 00:04:56.196 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:56.196 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:56.196 [107/268] Linking static target lib/librte_meter.a 00:04:56.196 [108/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:56.196 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:56.455 [110/268] Linking static target lib/librte_net.a 00:04:56.455 [111/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.455 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:56.714 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:56.714 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.715 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:56.715 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.715 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.973 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:57.231 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:57.231 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:57.231 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:57.489 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:57.748 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:58.006 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:58.006 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:58.006 [126/268] Linking static target lib/librte_pci.a 00:04:58.264 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:58.264 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:58.264 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:58.264 [130/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:58.264 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:58.522 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:58.522 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:58.522 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:58.522 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:58.522 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:58.522 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:58.780 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:58.780 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:58.780 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:58.780 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:58.780 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:58.780 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:58.780 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:58.780 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:58.780 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:59.039 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:59.039 [148/268] Linking static target lib/librte_cmdline.a 00:04:59.297 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:59.297 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:59.297 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:59.297 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:59.554 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:59.554 [154/268] Linking static target lib/librte_timer.a 00:04:59.554 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:59.812 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:00.070 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:00.070 [158/268] Linking static target lib/librte_compressdev.a 00:05:00.070 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:00.071 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:00.071 [161/268] Linking static target lib/librte_ethdev.a 00:05:00.328 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:00.328 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:00.328 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:00.329 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:00.329 [166/268] Linking static target lib/librte_dmadev.a 00:05:00.587 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:00.587 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:00.587 [169/268] Linking static target lib/librte_hash.a 00:05:00.845 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:00.845 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:00.845 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:01.103 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.103 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:01.103 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.361 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:01.361 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:01.361 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:01.620 [179/268] Linking static target lib/librte_cryptodev.a 00:05:01.620 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:01.620 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:01.620 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.620 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:01.879 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:01.879 [185/268] Linking static target lib/librte_power.a 00:05:01.879 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.138 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:02.411 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:02.411 [189/268] Linking static target lib/librte_reorder.a 00:05:02.411 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:02.411 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:02.411 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:02.411 [193/268] Linking static target lib/librte_security.a 00:05:03.001 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.001 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:03.259 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.517 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.517 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:03.517 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:03.775 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:03.775 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:03.775 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:04.358 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:04.358 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:04.358 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:04.358 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:04.358 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:04.358 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:04.358 [209/268] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.359 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:04.359 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:04.619 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:04.619 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:04.619 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:04.619 [215/268] Linking static target drivers/librte_bus_vdev.a 00:05:04.619 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:04.619 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:04.877 [218/268] Linking static target drivers/librte_bus_pci.a 00:05:04.877 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:05.136 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:05.136 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:05.136 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.136 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:05.394 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:05.394 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:05.394 [226/268] Linking static target drivers/librte_mempool_ring.a 00:05:05.394 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.782 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:07.717 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.717 [230/268] Linking target lib/librte_eal.so.24.1 00:05:07.975 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:07.975 [232/268] Linking target lib/librte_ring.so.24.1 00:05:07.975 [233/268] Linking target lib/librte_timer.so.24.1 00:05:07.975 [234/268] Linking target lib/librte_meter.so.24.1 00:05:07.975 [235/268] Linking target lib/librte_dmadev.so.24.1 00:05:07.975 [236/268] Linking target lib/librte_pci.so.24.1 00:05:07.975 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:07.975 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:07.975 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:08.233 [240/268] Linking target lib/librte_rcu.so.24.1 00:05:08.233 [241/268] Linking target lib/librte_mempool.so.24.1 00:05:08.233 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:08.233 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:08.233 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:08.233 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:08.233 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:08.233 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:08.233 [248/268] Linking target lib/librte_mbuf.so.24.1 
00:05:08.233 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:08.492 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:08.492 [251/268] Linking target lib/librte_reorder.so.24.1 00:05:08.492 [252/268] Linking target lib/librte_compressdev.so.24.1 00:05:08.492 [253/268] Linking target lib/librte_net.so.24.1 00:05:08.492 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:05:08.750 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:08.750 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:08.750 [257/268] Linking target lib/librte_hash.so.24.1 00:05:08.750 [258/268] Linking target lib/librte_security.so.24.1 00:05:08.750 [259/268] Linking target lib/librte_cmdline.so.24.1 00:05:09.007 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.007 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:09.007 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:09.265 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:09.265 [264/268] Linking target lib/librte_power.so.24.1 00:05:11.808 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:11.808 [266/268] Linking static target lib/librte_vhost.a 00:05:13.188 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.444 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:13.444 INFO: autodetecting backend as ninja 00:05:13.444 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:35.455 CC lib/ut_mock/mock.o 00:05:35.455 CC lib/log/log.o 00:05:35.455 CC lib/log/log_flags.o 00:05:35.455 CC lib/log/log_deprecated.o 00:05:35.455 CC lib/ut/ut.o 00:05:35.455 LIB libspdk_log.a 00:05:35.455 LIB libspdk_ut_mock.a 00:05:35.455 LIB libspdk_ut.a 00:05:35.455 SO libspdk_log.so.7.1 00:05:35.455 SO libspdk_ut.so.2.0 00:05:35.455 SO libspdk_ut_mock.so.6.0 00:05:35.455 SYMLINK libspdk_ut_mock.so 00:05:35.455 SYMLINK libspdk_ut.so 00:05:35.455 SYMLINK libspdk_log.so 00:05:35.455 CXX lib/trace_parser/trace.o 00:05:35.455 CC lib/util/bit_array.o 00:05:35.455 CC lib/util/base64.o 00:05:35.455 CC lib/util/crc32.o 00:05:35.455 CC lib/util/crc16.o 00:05:35.455 CC lib/util/crc32c.o 00:05:35.455 CC lib/util/cpuset.o 00:05:35.455 CC lib/ioat/ioat.o 00:05:35.455 CC lib/dma/dma.o 00:05:35.455 CC lib/vfio_user/host/vfio_user_pci.o 00:05:35.455 CC lib/vfio_user/host/vfio_user.o 00:05:35.455 CC lib/util/crc32_ieee.o 00:05:35.455 CC lib/util/crc64.o 00:05:35.455 CC lib/util/dif.o 00:05:35.455 CC lib/util/fd.o 00:05:35.455 LIB libspdk_dma.a 00:05:35.455 CC lib/util/fd_group.o 00:05:35.455 SO libspdk_dma.so.5.0 00:05:35.455 CC lib/util/file.o 00:05:35.455 CC lib/util/hexlify.o 00:05:35.455 CC lib/util/iov.o 00:05:35.455 LIB libspdk_ioat.a 00:05:35.455 SYMLINK libspdk_dma.so 00:05:35.455 LIB libspdk_vfio_user.a 00:05:35.455 CC lib/util/math.o 00:05:35.455 SO libspdk_ioat.so.7.0 00:05:35.455 CC lib/util/net.o 00:05:35.455 SO libspdk_vfio_user.so.5.0 00:05:35.455 SYMLINK libspdk_ioat.so 00:05:35.455 CC lib/util/pipe.o 00:05:35.455 CC lib/util/strerror_tls.o 00:05:35.455 CC lib/util/string.o 00:05:35.455 SYMLINK libspdk_vfio_user.so 00:05:35.455 CC lib/util/uuid.o 00:05:35.455 CC lib/util/xor.o 00:05:35.455 CC lib/util/zipf.o 00:05:35.455 CC 
lib/util/md5.o 00:05:35.714 LIB libspdk_util.a 00:05:35.973 SO libspdk_util.so.10.1 00:05:35.973 LIB libspdk_trace_parser.a 00:05:35.973 SO libspdk_trace_parser.so.6.0 00:05:35.973 SYMLINK libspdk_util.so 00:05:36.233 CC lib/conf/conf.o 00:05:36.233 CC lib/vmd/vmd.o 00:05:36.233 CC lib/vmd/led.o 00:05:36.233 SYMLINK libspdk_trace_parser.so 00:05:36.233 CC lib/rdma_utils/rdma_utils.o 00:05:36.233 CC lib/idxd/idxd.o 00:05:36.233 CC lib/idxd/idxd_user.o 00:05:36.233 CC lib/json/json_parse.o 00:05:36.233 CC lib/env_dpdk/env.o 00:05:36.233 CC lib/idxd/idxd_kernel.o 00:05:36.233 CC lib/env_dpdk/memory.o 00:05:36.491 CC lib/json/json_util.o 00:05:36.491 CC lib/json/json_write.o 00:05:36.749 LIB libspdk_conf.a 00:05:36.749 CC lib/env_dpdk/pci.o 00:05:36.749 CC lib/env_dpdk/init.o 00:05:36.749 SO libspdk_conf.so.6.0 00:05:36.749 LIB libspdk_rdma_utils.a 00:05:36.750 SO libspdk_rdma_utils.so.1.0 00:05:36.750 SYMLINK libspdk_conf.so 00:05:36.750 CC lib/env_dpdk/threads.o 00:05:36.750 SYMLINK libspdk_rdma_utils.so 00:05:36.750 CC lib/env_dpdk/pci_ioat.o 00:05:36.750 CC lib/env_dpdk/pci_virtio.o 00:05:37.113 CC lib/env_dpdk/pci_vmd.o 00:05:37.113 CC lib/env_dpdk/pci_idxd.o 00:05:37.113 LIB libspdk_json.a 00:05:37.113 SO libspdk_json.so.6.0 00:05:37.113 CC lib/env_dpdk/pci_event.o 00:05:37.113 CC lib/env_dpdk/sigbus_handler.o 00:05:37.113 CC lib/env_dpdk/pci_dpdk.o 00:05:37.113 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:37.113 LIB libspdk_idxd.a 00:05:37.113 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:37.113 SYMLINK libspdk_json.so 00:05:37.113 CC lib/rdma_provider/common.o 00:05:37.113 SO libspdk_idxd.so.12.1 00:05:37.113 LIB libspdk_vmd.a 00:05:37.113 SO libspdk_vmd.so.6.0 00:05:37.370 SYMLINK libspdk_idxd.so 00:05:37.370 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:37.370 SYMLINK libspdk_vmd.so 00:05:37.370 CC lib/jsonrpc/jsonrpc_server.o 00:05:37.370 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:37.370 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:37.370 CC lib/jsonrpc/jsonrpc_client.o 00:05:37.628 LIB libspdk_rdma_provider.a 00:05:37.628 SO libspdk_rdma_provider.so.7.0 00:05:37.628 SYMLINK libspdk_rdma_provider.so 00:05:37.628 LIB libspdk_jsonrpc.a 00:05:37.628 SO libspdk_jsonrpc.so.6.0 00:05:37.886 SYMLINK libspdk_jsonrpc.so 00:05:38.145 CC lib/rpc/rpc.o 00:05:38.404 LIB libspdk_env_dpdk.a 00:05:38.404 LIB libspdk_rpc.a 00:05:38.404 SO libspdk_env_dpdk.so.15.1 00:05:38.663 SO libspdk_rpc.so.6.0 00:05:38.664 SYMLINK libspdk_rpc.so 00:05:38.664 SYMLINK libspdk_env_dpdk.so 00:05:38.922 CC lib/notify/notify.o 00:05:38.922 CC lib/notify/notify_rpc.o 00:05:38.922 CC lib/trace/trace.o 00:05:38.922 CC lib/trace/trace_flags.o 00:05:38.922 CC lib/trace/trace_rpc.o 00:05:38.922 CC lib/keyring/keyring.o 00:05:38.922 CC lib/keyring/keyring_rpc.o 00:05:39.181 LIB libspdk_notify.a 00:05:39.181 SO libspdk_notify.so.6.0 00:05:39.181 LIB libspdk_keyring.a 00:05:39.181 SYMLINK libspdk_notify.so 00:05:39.181 LIB libspdk_trace.a 00:05:39.441 SO libspdk_keyring.so.2.0 00:05:39.441 SO libspdk_trace.so.11.0 00:05:39.441 SYMLINK libspdk_keyring.so 00:05:39.441 SYMLINK libspdk_trace.so 00:05:39.700 CC lib/thread/thread.o 00:05:39.700 CC lib/thread/iobuf.o 00:05:39.700 CC lib/sock/sock_rpc.o 00:05:39.700 CC lib/sock/sock.o 00:05:40.634 LIB libspdk_sock.a 00:05:40.634 SO libspdk_sock.so.10.0 00:05:40.634 SYMLINK libspdk_sock.so 00:05:40.893 CC lib/nvme/nvme_ctrlr.o 00:05:40.893 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:40.893 CC lib/nvme/nvme_pcie_common.o 00:05:40.893 CC lib/nvme/nvme_fabric.o 00:05:40.893 CC lib/nvme/nvme_ns_cmd.o 00:05:40.893 
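By this point the CC lib/... stream has switched from DPDK objects to SPDK's own tree (log, util, json, rpc, thread, sock, nvme, ...). A hedged sketch of reproducing such a debug build with both sanitizers outside CI, using switches that exist in SPDK's top-level configure (the CI wrapper scripts pass additional test-specific options not shown here):

  ./configure --enable-debug --enable-asan --enable-ubsan --with-shared
  make -j10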
CC lib/nvme/nvme_pcie.o 00:05:40.893 CC lib/nvme/nvme_qpair.o 00:05:40.893 CC lib/nvme/nvme.o 00:05:40.893 CC lib/nvme/nvme_ns.o 00:05:41.836 CC lib/nvme/nvme_quirks.o 00:05:41.836 CC lib/nvme/nvme_transport.o 00:05:41.836 CC lib/nvme/nvme_discovery.o 00:05:41.836 LIB libspdk_thread.a 00:05:41.836 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:42.094 SO libspdk_thread.so.11.0 00:05:42.094 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:42.094 SYMLINK libspdk_thread.so 00:05:42.094 CC lib/nvme/nvme_tcp.o 00:05:42.094 CC lib/nvme/nvme_opal.o 00:05:42.094 CC lib/nvme/nvme_io_msg.o 00:05:42.354 CC lib/nvme/nvme_poll_group.o 00:05:42.354 CC lib/nvme/nvme_zns.o 00:05:42.684 CC lib/nvme/nvme_stubs.o 00:05:42.684 CC lib/nvme/nvme_auth.o 00:05:42.684 CC lib/nvme/nvme_cuse.o 00:05:42.964 CC lib/nvme/nvme_rdma.o 00:05:43.223 CC lib/accel/accel.o 00:05:43.223 CC lib/blob/blobstore.o 00:05:43.223 CC lib/blob/request.o 00:05:43.223 CC lib/blob/zeroes.o 00:05:43.223 CC lib/blob/blob_bs_dev.o 00:05:43.482 CC lib/accel/accel_rpc.o 00:05:43.741 CC lib/accel/accel_sw.o 00:05:43.741 CC lib/init/json_config.o 00:05:43.741 CC lib/init/subsystem.o 00:05:44.000 CC lib/init/subsystem_rpc.o 00:05:44.000 CC lib/virtio/virtio.o 00:05:44.000 CC lib/init/rpc.o 00:05:44.000 CC lib/virtio/virtio_vhost_user.o 00:05:44.000 CC lib/fsdev/fsdev.o 00:05:44.259 CC lib/fsdev/fsdev_io.o 00:05:44.259 CC lib/fsdev/fsdev_rpc.o 00:05:44.259 CC lib/virtio/virtio_vfio_user.o 00:05:44.259 LIB libspdk_init.a 00:05:44.259 SO libspdk_init.so.6.0 00:05:44.259 CC lib/virtio/virtio_pci.o 00:05:44.517 SYMLINK libspdk_init.so 00:05:44.517 LIB libspdk_nvme.a 00:05:44.517 LIB libspdk_accel.a 00:05:44.517 CC lib/event/app.o 00:05:44.517 CC lib/event/scheduler_static.o 00:05:44.517 CC lib/event/app_rpc.o 00:05:44.517 CC lib/event/reactor.o 00:05:44.517 CC lib/event/log_rpc.o 00:05:44.777 SO libspdk_accel.so.16.0 00:05:44.777 LIB libspdk_virtio.a 00:05:44.777 SYMLINK libspdk_accel.so 00:05:44.777 SO libspdk_virtio.so.7.0 00:05:44.777 SO libspdk_nvme.so.15.0 00:05:45.037 SYMLINK libspdk_virtio.so 00:05:45.037 LIB libspdk_fsdev.a 00:05:45.037 CC lib/bdev/bdev.o 00:05:45.037 CC lib/bdev/bdev_rpc.o 00:05:45.037 CC lib/bdev/bdev_zone.o 00:05:45.037 CC lib/bdev/part.o 00:05:45.037 CC lib/bdev/scsi_nvme.o 00:05:45.037 SO libspdk_fsdev.so.2.0 00:05:45.296 SYMLINK libspdk_nvme.so 00:05:45.296 SYMLINK libspdk_fsdev.so 00:05:45.296 LIB libspdk_event.a 00:05:45.554 SO libspdk_event.so.14.0 00:05:45.554 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:45.554 SYMLINK libspdk_event.so 00:05:46.492 LIB libspdk_fuse_dispatcher.a 00:05:46.492 SO libspdk_fuse_dispatcher.so.1.0 00:05:46.492 SYMLINK libspdk_fuse_dispatcher.so 00:05:47.868 LIB libspdk_blob.a 00:05:47.868 SO libspdk_blob.so.12.0 00:05:47.868 SYMLINK libspdk_blob.so 00:05:48.127 CC lib/lvol/lvol.o 00:05:48.127 CC lib/blobfs/blobfs.o 00:05:48.127 CC lib/blobfs/tree.o 00:05:48.694 LIB libspdk_bdev.a 00:05:48.694 SO libspdk_bdev.so.17.0 00:05:48.953 SYMLINK libspdk_bdev.so 00:05:49.210 CC lib/nvmf/ctrlr.o 00:05:49.211 CC lib/nvmf/ctrlr_discovery.o 00:05:49.211 CC lib/ublk/ublk.o 00:05:49.211 CC lib/nvmf/ctrlr_bdev.o 00:05:49.211 CC lib/ublk/ublk_rpc.o 00:05:49.211 CC lib/scsi/dev.o 00:05:49.211 CC lib/ftl/ftl_core.o 00:05:49.211 CC lib/nbd/nbd.o 00:05:49.469 LIB libspdk_blobfs.a 00:05:49.469 LIB libspdk_lvol.a 00:05:49.469 SO libspdk_blobfs.so.11.0 00:05:49.469 SO libspdk_lvol.so.11.0 00:05:49.469 CC lib/nbd/nbd_rpc.o 00:05:49.469 SYMLINK libspdk_lvol.so 00:05:49.469 SYMLINK libspdk_blobfs.so 00:05:49.469 CC lib/ftl/ftl_init.o 
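The libspdk_*.a / .so pairs linked above are also consumable from external code through the pkg-config files the build emits; a sketch, assuming a shared-library build and the usual build/lib/pkgconfig output location (both assumptions, so verify against your tree):

  # my_nvme_app.c is a placeholder source file for illustration
  PKG_CONFIG_PATH=./build/lib/pkgconfig \
    cc my_nvme_app.c $(pkg-config --cflags --libs spdk_nvme spdk_env_dpdk) \
    -o my_nvme_app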
00:05:49.469 CC lib/ftl/ftl_layout.o 00:05:49.469 CC lib/scsi/lun.o 00:05:49.727 CC lib/ftl/ftl_debug.o 00:05:49.727 CC lib/nvmf/subsystem.o 00:05:49.727 CC lib/ftl/ftl_io.o 00:05:49.727 LIB libspdk_nbd.a 00:05:49.727 CC lib/ftl/ftl_sb.o 00:05:49.727 SO libspdk_nbd.so.7.0 00:05:49.988 SYMLINK libspdk_nbd.so 00:05:49.988 CC lib/ftl/ftl_l2p.o 00:05:49.988 CC lib/ftl/ftl_l2p_flat.o 00:05:49.988 CC lib/scsi/port.o 00:05:49.988 CC lib/ftl/ftl_nv_cache.o 00:05:49.988 LIB libspdk_ublk.a 00:05:49.988 SO libspdk_ublk.so.3.0 00:05:49.988 CC lib/ftl/ftl_band.o 00:05:49.988 CC lib/nvmf/nvmf.o 00:05:49.988 CC lib/nvmf/nvmf_rpc.o 00:05:50.246 SYMLINK libspdk_ublk.so 00:05:50.246 CC lib/ftl/ftl_band_ops.o 00:05:50.246 CC lib/scsi/scsi.o 00:05:50.246 CC lib/ftl/ftl_writer.o 00:05:50.246 CC lib/nvmf/transport.o 00:05:50.246 CC lib/scsi/scsi_bdev.o 00:05:50.504 CC lib/scsi/scsi_pr.o 00:05:50.504 CC lib/scsi/scsi_rpc.o 00:05:50.504 CC lib/ftl/ftl_rq.o 00:05:50.762 CC lib/scsi/task.o 00:05:50.762 CC lib/nvmf/tcp.o 00:05:51.020 CC lib/nvmf/stubs.o 00:05:51.020 CC lib/ftl/ftl_reloc.o 00:05:51.020 LIB libspdk_scsi.a 00:05:51.020 SO libspdk_scsi.so.9.0 00:05:51.020 CC lib/ftl/ftl_l2p_cache.o 00:05:51.278 SYMLINK libspdk_scsi.so 00:05:51.278 CC lib/nvmf/mdns_server.o 00:05:51.278 CC lib/nvmf/rdma.o 00:05:51.537 CC lib/nvmf/auth.o 00:05:51.537 CC lib/iscsi/conn.o 00:05:51.537 CC lib/iscsi/init_grp.o 00:05:51.537 CC lib/iscsi/iscsi.o 00:05:51.537 CC lib/vhost/vhost.o 00:05:51.795 CC lib/vhost/vhost_rpc.o 00:05:51.795 CC lib/iscsi/param.o 00:05:51.795 CC lib/iscsi/portal_grp.o 00:05:51.795 CC lib/ftl/ftl_p2l.o 00:05:52.362 CC lib/ftl/ftl_p2l_log.o 00:05:52.362 CC lib/ftl/mngt/ftl_mngt.o 00:05:52.362 CC lib/vhost/vhost_scsi.o 00:05:52.362 CC lib/vhost/vhost_blk.o 00:05:52.362 CC lib/vhost/rte_vhost_user.o 00:05:52.620 CC lib/iscsi/tgt_node.o 00:05:52.620 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:52.620 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:52.620 CC lib/iscsi/iscsi_subsystem.o 00:05:52.877 CC lib/iscsi/iscsi_rpc.o 00:05:52.877 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:52.877 CC lib/iscsi/task.o 00:05:53.136 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:53.136 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:53.136 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:53.136 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:53.395 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:53.395 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:53.395 LIB libspdk_iscsi.a 00:05:53.395 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:53.395 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:53.395 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:53.395 CC lib/ftl/utils/ftl_conf.o 00:05:53.395 SO libspdk_iscsi.so.8.0 00:05:53.652 CC lib/ftl/utils/ftl_md.o 00:05:53.652 CC lib/ftl/utils/ftl_mempool.o 00:05:53.652 LIB libspdk_vhost.a 00:05:53.652 CC lib/ftl/utils/ftl_bitmap.o 00:05:53.652 CC lib/ftl/utils/ftl_property.o 00:05:53.652 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:53.652 SYMLINK libspdk_iscsi.so 00:05:53.652 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:53.652 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:53.908 SO libspdk_vhost.so.8.0 00:05:53.908 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:53.908 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:53.908 SYMLINK libspdk_vhost.so 00:05:53.908 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:53.908 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:53.908 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:53.908 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:53.908 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:54.167 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:54.167 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:54.167 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:54.167 CC lib/ftl/base/ftl_base_dev.o 00:05:54.167 CC lib/ftl/base/ftl_base_bdev.o 00:05:54.167 CC lib/ftl/ftl_trace.o 00:05:54.167 LIB libspdk_nvmf.a 00:05:54.424 SO libspdk_nvmf.so.20.0 00:05:54.424 LIB libspdk_ftl.a 00:05:54.680 SYMLINK libspdk_nvmf.so 00:05:54.680 SO libspdk_ftl.so.9.0 00:05:55.243 SYMLINK libspdk_ftl.so 00:05:55.499 CC module/env_dpdk/env_dpdk_rpc.o 00:05:55.499 CC module/blob/bdev/blob_bdev.o 00:05:55.756 CC module/accel/error/accel_error.o 00:05:55.756 CC module/keyring/file/keyring.o 00:05:55.756 CC module/keyring/linux/keyring.o 00:05:55.756 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:55.756 CC module/accel/ioat/accel_ioat.o 00:05:55.756 CC module/sock/posix/posix.o 00:05:55.756 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:55.756 CC module/fsdev/aio/fsdev_aio.o 00:05:55.756 LIB libspdk_env_dpdk_rpc.a 00:05:55.756 SO libspdk_env_dpdk_rpc.so.6.0 00:05:55.756 SYMLINK libspdk_env_dpdk_rpc.so 00:05:55.756 CC module/keyring/linux/keyring_rpc.o 00:05:55.756 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:55.756 CC module/keyring/file/keyring_rpc.o 00:05:55.756 LIB libspdk_scheduler_dpdk_governor.a 00:05:55.756 CC module/accel/ioat/accel_ioat_rpc.o 00:05:55.756 CC module/accel/error/accel_error_rpc.o 00:05:55.756 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:56.014 LIB libspdk_scheduler_dynamic.a 00:05:56.014 SO libspdk_scheduler_dynamic.so.4.0 00:05:56.014 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:56.014 CC module/fsdev/aio/linux_aio_mgr.o 00:05:56.014 LIB libspdk_blob_bdev.a 00:05:56.014 LIB libspdk_keyring_linux.a 00:05:56.014 SYMLINK libspdk_scheduler_dynamic.so 00:05:56.014 SO libspdk_blob_bdev.so.12.0 00:05:56.014 SO libspdk_keyring_linux.so.1.0 00:05:56.014 LIB libspdk_keyring_file.a 00:05:56.014 LIB libspdk_accel_ioat.a 00:05:56.014 SO libspdk_keyring_file.so.2.0 00:05:56.014 SYMLINK libspdk_blob_bdev.so 00:05:56.014 SO libspdk_accel_ioat.so.6.0 00:05:56.014 LIB libspdk_accel_error.a 00:05:56.014 SYMLINK libspdk_keyring_linux.so 00:05:56.014 SO libspdk_accel_error.so.2.0 00:05:56.272 SYMLINK libspdk_accel_ioat.so 00:05:56.272 SYMLINK libspdk_keyring_file.so 00:05:56.272 CC module/accel/dsa/accel_dsa.o 00:05:56.272 CC module/scheduler/gscheduler/gscheduler.o 00:05:56.272 SYMLINK libspdk_accel_error.so 00:05:56.272 CC module/accel/dsa/accel_dsa_rpc.o 00:05:56.272 CC module/accel/iaa/accel_iaa.o 00:05:56.272 CC module/bdev/delay/vbdev_delay.o 00:05:56.530 LIB libspdk_scheduler_gscheduler.a 00:05:56.530 CC module/blobfs/bdev/blobfs_bdev.o 00:05:56.530 CC module/bdev/gpt/gpt.o 00:05:56.530 CC module/bdev/error/vbdev_error.o 00:05:56.530 SO libspdk_scheduler_gscheduler.so.4.0 00:05:56.530 SYMLINK libspdk_scheduler_gscheduler.so 00:05:56.530 CC module/bdev/error/vbdev_error_rpc.o 00:05:56.530 LIB libspdk_accel_dsa.a 00:05:56.530 CC module/bdev/lvol/vbdev_lvol.o 00:05:56.530 SO libspdk_accel_dsa.so.5.0 00:05:56.530 CC module/bdev/gpt/vbdev_gpt.o 00:05:56.530 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:56.788 CC module/accel/iaa/accel_iaa_rpc.o 00:05:56.788 LIB libspdk_sock_posix.a 00:05:56.788 LIB libspdk_fsdev_aio.a 00:05:56.788 SO libspdk_sock_posix.so.6.0 00:05:56.788 SYMLINK libspdk_accel_dsa.so 00:05:56.788 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:56.788 SO libspdk_fsdev_aio.so.1.0 00:05:56.788 SYMLINK libspdk_sock_posix.so 00:05:56.788 LIB libspdk_accel_iaa.a 00:05:56.788 LIB libspdk_bdev_error.a 00:05:56.788 SO libspdk_accel_iaa.so.3.0 00:05:56.788 SYMLINK libspdk_fsdev_aio.so 00:05:56.788 LIB 
libspdk_blobfs_bdev.a 00:05:56.788 SO libspdk_bdev_error.so.6.0 00:05:56.788 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:57.046 SO libspdk_blobfs_bdev.so.6.0 00:05:57.046 SYMLINK libspdk_bdev_error.so 00:05:57.046 SYMLINK libspdk_accel_iaa.so 00:05:57.046 LIB libspdk_bdev_delay.a 00:05:57.046 LIB libspdk_bdev_gpt.a 00:05:57.046 CC module/bdev/malloc/bdev_malloc.o 00:05:57.046 SYMLINK libspdk_blobfs_bdev.so 00:05:57.046 SO libspdk_bdev_delay.so.6.0 00:05:57.046 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:57.046 CC module/bdev/null/bdev_null.o 00:05:57.046 SO libspdk_bdev_gpt.so.6.0 00:05:57.046 CC module/bdev/nvme/bdev_nvme.o 00:05:57.046 SYMLINK libspdk_bdev_gpt.so 00:05:57.046 SYMLINK libspdk_bdev_delay.so 00:05:57.046 CC module/bdev/passthru/vbdev_passthru.o 00:05:57.304 CC module/bdev/raid/bdev_raid.o 00:05:57.304 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:57.304 CC module/bdev/nvme/nvme_rpc.o 00:05:57.304 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:57.304 CC module/bdev/split/vbdev_split.o 00:05:57.304 CC module/bdev/null/bdev_null_rpc.o 00:05:57.304 LIB libspdk_bdev_lvol.a 00:05:57.563 SO libspdk_bdev_lvol.so.6.0 00:05:57.563 LIB libspdk_bdev_malloc.a 00:05:57.563 SYMLINK libspdk_bdev_lvol.so 00:05:57.563 CC module/bdev/split/vbdev_split_rpc.o 00:05:57.563 SO libspdk_bdev_malloc.so.6.0 00:05:57.563 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:57.563 CC module/bdev/nvme/bdev_mdns_client.o 00:05:57.563 LIB libspdk_bdev_null.a 00:05:57.563 CC module/bdev/nvme/vbdev_opal.o 00:05:57.563 SO libspdk_bdev_null.so.6.0 00:05:57.822 SYMLINK libspdk_bdev_malloc.so 00:05:57.822 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:57.822 SYMLINK libspdk_bdev_null.so 00:05:57.822 CC module/bdev/raid/bdev_raid_rpc.o 00:05:57.822 LIB libspdk_bdev_split.a 00:05:57.822 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:57.822 LIB libspdk_bdev_passthru.a 00:05:57.822 SO libspdk_bdev_split.so.6.0 00:05:57.822 SO libspdk_bdev_passthru.so.6.0 00:05:57.822 SYMLINK libspdk_bdev_split.so 00:05:57.822 LIB libspdk_bdev_zone_block.a 00:05:58.081 CC module/bdev/raid/bdev_raid_sb.o 00:05:58.081 SYMLINK libspdk_bdev_passthru.so 00:05:58.081 CC module/bdev/raid/raid0.o 00:05:58.081 SO libspdk_bdev_zone_block.so.6.0 00:05:58.081 CC module/bdev/xnvme/bdev_xnvme.o 00:05:58.081 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:58.081 CC module/bdev/raid/raid1.o 00:05:58.081 SYMLINK libspdk_bdev_zone_block.so 00:05:58.081 CC module/bdev/raid/concat.o 00:05:58.081 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:58.339 CC module/bdev/aio/bdev_aio.o 00:05:58.339 CC module/bdev/aio/bdev_aio_rpc.o 00:05:58.339 LIB libspdk_bdev_xnvme.a 00:05:58.339 SO libspdk_bdev_xnvme.so.3.0 00:05:58.339 LIB libspdk_bdev_raid.a 00:05:58.339 CC module/bdev/ftl/bdev_ftl.o 00:05:58.339 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:58.597 CC module/bdev/iscsi/bdev_iscsi.o 00:05:58.597 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:58.597 SYMLINK libspdk_bdev_xnvme.so 00:05:58.597 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:58.597 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:58.597 SO libspdk_bdev_raid.so.6.0 00:05:58.597 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:58.597 SYMLINK libspdk_bdev_raid.so 00:05:58.597 LIB libspdk_bdev_aio.a 00:05:58.923 SO libspdk_bdev_aio.so.6.0 00:05:58.923 SYMLINK libspdk_bdev_aio.so 00:05:58.923 LIB libspdk_bdev_ftl.a 00:05:58.923 SO libspdk_bdev_ftl.so.6.0 00:05:58.924 LIB libspdk_bdev_iscsi.a 00:05:58.924 SO libspdk_bdev_iscsi.so.6.0 00:05:59.222 SYMLINK libspdk_bdev_ftl.so 00:05:59.222 SYMLINK libspdk_bdev_iscsi.so 
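The bdev modules just linked (malloc, null, raid, passthru, zone_block, xnvme, aio, ftl, iscsi, virtio) are the ones a runtime JSON configuration instantiates. A small sketch of such a config for the malloc module, using the bdev_malloc_create RPC and its documented parameters (the file name and sizes are arbitrary):

  cat > bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
          }
        ]
      }
    ]
  }
  EOF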
00:05:59.222 LIB libspdk_bdev_virtio.a 00:05:59.222 SO libspdk_bdev_virtio.so.6.0 00:05:59.222 SYMLINK libspdk_bdev_virtio.so 00:06:00.599 LIB libspdk_bdev_nvme.a 00:06:00.599 SO libspdk_bdev_nvme.so.7.1 00:06:00.599 SYMLINK libspdk_bdev_nvme.so 00:06:01.168 CC module/event/subsystems/scheduler/scheduler.o 00:06:01.169 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:01.169 CC module/event/subsystems/sock/sock.o 00:06:01.169 CC module/event/subsystems/vmd/vmd.o 00:06:01.169 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:01.169 CC module/event/subsystems/fsdev/fsdev.o 00:06:01.169 CC module/event/subsystems/keyring/keyring.o 00:06:01.169 CC module/event/subsystems/iobuf/iobuf.o 00:06:01.169 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:01.428 LIB libspdk_event_fsdev.a 00:06:01.428 LIB libspdk_event_vhost_blk.a 00:06:01.428 LIB libspdk_event_keyring.a 00:06:01.428 SO libspdk_event_fsdev.so.1.0 00:06:01.428 LIB libspdk_event_vmd.a 00:06:01.428 LIB libspdk_event_scheduler.a 00:06:01.428 LIB libspdk_event_sock.a 00:06:01.428 SO libspdk_event_vhost_blk.so.3.0 00:06:01.428 SO libspdk_event_keyring.so.1.0 00:06:01.428 LIB libspdk_event_iobuf.a 00:06:01.428 SO libspdk_event_scheduler.so.4.0 00:06:01.428 SO libspdk_event_sock.so.5.0 00:06:01.428 SO libspdk_event_vmd.so.6.0 00:06:01.428 SO libspdk_event_iobuf.so.3.0 00:06:01.428 SYMLINK libspdk_event_fsdev.so 00:06:01.428 SYMLINK libspdk_event_keyring.so 00:06:01.428 SYMLINK libspdk_event_sock.so 00:06:01.428 SYMLINK libspdk_event_scheduler.so 00:06:01.428 SYMLINK libspdk_event_vhost_blk.so 00:06:01.428 SYMLINK libspdk_event_vmd.so 00:06:01.428 SYMLINK libspdk_event_iobuf.so 00:06:01.995 CC module/event/subsystems/accel/accel.o 00:06:01.995 LIB libspdk_event_accel.a 00:06:01.995 SO libspdk_event_accel.so.6.0 00:06:02.253 SYMLINK libspdk_event_accel.so 00:06:02.513 CC module/event/subsystems/bdev/bdev.o 00:06:02.771 LIB libspdk_event_bdev.a 00:06:02.771 SO libspdk_event_bdev.so.6.0 00:06:03.030 SYMLINK libspdk_event_bdev.so 00:06:03.030 CC module/event/subsystems/scsi/scsi.o 00:06:03.030 CC module/event/subsystems/ublk/ublk.o 00:06:03.030 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:03.030 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:03.290 CC module/event/subsystems/nbd/nbd.o 00:06:03.290 LIB libspdk_event_ublk.a 00:06:03.290 SO libspdk_event_ublk.so.3.0 00:06:03.290 LIB libspdk_event_scsi.a 00:06:03.290 LIB libspdk_event_nbd.a 00:06:03.290 SYMLINK libspdk_event_ublk.so 00:06:03.550 SO libspdk_event_scsi.so.6.0 00:06:03.550 SO libspdk_event_nbd.so.6.0 00:06:03.550 SYMLINK libspdk_event_nbd.so 00:06:03.550 SYMLINK libspdk_event_scsi.so 00:06:03.550 LIB libspdk_event_nvmf.a 00:06:03.550 SO libspdk_event_nvmf.so.6.0 00:06:03.550 SYMLINK libspdk_event_nvmf.so 00:06:03.843 CC module/event/subsystems/iscsi/iscsi.o 00:06:03.843 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:03.843 LIB libspdk_event_vhost_scsi.a 00:06:04.102 SO libspdk_event_vhost_scsi.so.3.0 00:06:04.102 LIB libspdk_event_iscsi.a 00:06:04.102 SO libspdk_event_iscsi.so.6.0 00:06:04.102 SYMLINK libspdk_event_vhost_scsi.so 00:06:04.102 SYMLINK libspdk_event_iscsi.so 00:06:04.360 SO libspdk.so.6.0 00:06:04.360 SYMLINK libspdk.so 00:06:04.618 CC app/trace_record/trace_record.o 00:06:04.618 CXX app/trace/trace.o 00:06:04.618 CC app/spdk_lspci/spdk_lspci.o 00:06:04.618 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:04.618 CC app/nvmf_tgt/nvmf_main.o 00:06:04.618 CC app/iscsi_tgt/iscsi_tgt.o 00:06:04.618 CC app/spdk_tgt/spdk_tgt.o 00:06:04.618 CC 
examples/util/zipf/zipf.o 00:06:04.618 CC test/thread/poller_perf/poller_perf.o 00:06:04.618 CC examples/ioat/perf/perf.o 00:06:04.876 LINK spdk_lspci 00:06:04.877 LINK poller_perf 00:06:04.877 LINK interrupt_tgt 00:06:04.877 LINK nvmf_tgt 00:06:04.877 LINK zipf 00:06:04.877 LINK iscsi_tgt 00:06:04.877 LINK spdk_tgt 00:06:04.877 LINK spdk_trace_record 00:06:05.135 LINK ioat_perf 00:06:05.135 LINK spdk_trace 00:06:05.393 CC app/spdk_nvme_perf/perf.o 00:06:05.393 CC app/spdk_nvme_identify/identify.o 00:06:05.393 CC app/spdk_nvme_discover/discovery_aer.o 00:06:05.393 CC app/spdk_top/spdk_top.o 00:06:05.393 CC app/spdk_dd/spdk_dd.o 00:06:05.393 CC examples/ioat/verify/verify.o 00:06:05.393 CC test/dma/test_dma/test_dma.o 00:06:05.393 CC app/fio/nvme/fio_plugin.o 00:06:05.652 CC test/app/bdev_svc/bdev_svc.o 00:06:05.652 LINK spdk_nvme_discover 00:06:05.652 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:05.652 LINK verify 00:06:05.652 LINK bdev_svc 00:06:05.909 LINK spdk_dd 00:06:05.909 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:05.909 LINK test_dma 00:06:06.165 CC examples/thread/thread/thread_ex.o 00:06:06.166 LINK nvme_fuzz 00:06:06.166 LINK spdk_nvme 00:06:06.166 CC examples/sock/hello_world/hello_sock.o 00:06:06.422 CC examples/vmd/lsvmd/lsvmd.o 00:06:06.422 LINK thread 00:06:06.422 CC app/fio/bdev/fio_plugin.o 00:06:06.422 LINK spdk_nvme_perf 00:06:06.422 CC examples/idxd/perf/perf.o 00:06:06.422 LINK spdk_nvme_identify 00:06:06.423 LINK lsvmd 00:06:06.423 LINK spdk_top 00:06:06.423 LINK hello_sock 00:06:06.680 CC app/vhost/vhost.o 00:06:06.680 LINK vhost 00:06:06.680 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:06.937 TEST_HEADER include/spdk/accel.h 00:06:06.937 TEST_HEADER include/spdk/accel_module.h 00:06:06.937 TEST_HEADER include/spdk/assert.h 00:06:06.937 TEST_HEADER include/spdk/barrier.h 00:06:06.937 TEST_HEADER include/spdk/base64.h 00:06:06.937 TEST_HEADER include/spdk/bdev.h 00:06:06.937 TEST_HEADER include/spdk/bdev_module.h 00:06:06.937 TEST_HEADER include/spdk/bdev_zone.h 00:06:06.937 CC examples/accel/perf/accel_perf.o 00:06:06.937 CC examples/vmd/led/led.o 00:06:06.937 TEST_HEADER include/spdk/bit_array.h 00:06:06.937 TEST_HEADER include/spdk/bit_pool.h 00:06:06.937 TEST_HEADER include/spdk/blob_bdev.h 00:06:06.937 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:06.937 TEST_HEADER include/spdk/blobfs.h 00:06:06.937 TEST_HEADER include/spdk/blob.h 00:06:06.937 TEST_HEADER include/spdk/conf.h 00:06:06.937 LINK idxd_perf 00:06:06.937 TEST_HEADER include/spdk/config.h 00:06:06.937 TEST_HEADER include/spdk/cpuset.h 00:06:06.937 TEST_HEADER include/spdk/crc16.h 00:06:06.937 TEST_HEADER include/spdk/crc32.h 00:06:06.937 TEST_HEADER include/spdk/crc64.h 00:06:06.937 TEST_HEADER include/spdk/dif.h 00:06:06.937 TEST_HEADER include/spdk/dma.h 00:06:06.937 TEST_HEADER include/spdk/endian.h 00:06:06.937 TEST_HEADER include/spdk/env_dpdk.h 00:06:06.937 TEST_HEADER include/spdk/env.h 00:06:06.937 TEST_HEADER include/spdk/event.h 00:06:06.937 TEST_HEADER include/spdk/fd_group.h 00:06:06.937 TEST_HEADER include/spdk/fd.h 00:06:06.937 TEST_HEADER include/spdk/file.h 00:06:06.937 TEST_HEADER include/spdk/fsdev.h 00:06:06.937 TEST_HEADER include/spdk/fsdev_module.h 00:06:06.937 TEST_HEADER include/spdk/ftl.h 00:06:06.937 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:06.937 TEST_HEADER include/spdk/gpt_spec.h 00:06:06.937 TEST_HEADER include/spdk/hexlify.h 00:06:06.937 TEST_HEADER include/spdk/histogram_data.h 00:06:06.937 TEST_HEADER include/spdk/idxd.h 00:06:06.937 TEST_HEADER 
include/spdk/idxd_spec.h 00:06:06.937 TEST_HEADER include/spdk/init.h 00:06:06.937 TEST_HEADER include/spdk/ioat.h 00:06:06.938 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:06.938 TEST_HEADER include/spdk/ioat_spec.h 00:06:06.938 TEST_HEADER include/spdk/iscsi_spec.h 00:06:06.938 TEST_HEADER include/spdk/json.h 00:06:06.938 TEST_HEADER include/spdk/jsonrpc.h 00:06:06.938 TEST_HEADER include/spdk/keyring.h 00:06:06.938 TEST_HEADER include/spdk/keyring_module.h 00:06:06.938 TEST_HEADER include/spdk/likely.h 00:06:06.938 TEST_HEADER include/spdk/log.h 00:06:06.938 TEST_HEADER include/spdk/lvol.h 00:06:06.938 TEST_HEADER include/spdk/md5.h 00:06:06.938 TEST_HEADER include/spdk/memory.h 00:06:06.938 TEST_HEADER include/spdk/mmio.h 00:06:06.938 TEST_HEADER include/spdk/nbd.h 00:06:06.938 TEST_HEADER include/spdk/net.h 00:06:06.938 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:06.938 TEST_HEADER include/spdk/notify.h 00:06:06.938 TEST_HEADER include/spdk/nvme.h 00:06:06.938 TEST_HEADER include/spdk/nvme_intel.h 00:06:06.938 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:06.938 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:06.938 TEST_HEADER include/spdk/nvme_spec.h 00:06:06.938 TEST_HEADER include/spdk/nvme_zns.h 00:06:06.938 CC examples/blob/hello_world/hello_blob.o 00:06:06.938 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:06.938 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:06.938 TEST_HEADER include/spdk/nvmf.h 00:06:06.938 TEST_HEADER include/spdk/nvmf_spec.h 00:06:06.938 TEST_HEADER include/spdk/nvmf_transport.h 00:06:06.938 TEST_HEADER include/spdk/opal.h 00:06:06.938 TEST_HEADER include/spdk/opal_spec.h 00:06:06.938 TEST_HEADER include/spdk/pci_ids.h 00:06:06.938 TEST_HEADER include/spdk/pipe.h 00:06:06.938 TEST_HEADER include/spdk/queue.h 00:06:07.195 TEST_HEADER include/spdk/reduce.h 00:06:07.195 TEST_HEADER include/spdk/rpc.h 00:06:07.195 TEST_HEADER include/spdk/scheduler.h 00:06:07.195 TEST_HEADER include/spdk/scsi.h 00:06:07.195 LINK led 00:06:07.195 TEST_HEADER include/spdk/scsi_spec.h 00:06:07.195 TEST_HEADER include/spdk/sock.h 00:06:07.195 LINK spdk_bdev 00:06:07.195 TEST_HEADER include/spdk/stdinc.h 00:06:07.195 TEST_HEADER include/spdk/string.h 00:06:07.195 TEST_HEADER include/spdk/thread.h 00:06:07.196 TEST_HEADER include/spdk/trace.h 00:06:07.196 TEST_HEADER include/spdk/trace_parser.h 00:06:07.196 TEST_HEADER include/spdk/tree.h 00:06:07.196 TEST_HEADER include/spdk/ublk.h 00:06:07.196 TEST_HEADER include/spdk/util.h 00:06:07.196 TEST_HEADER include/spdk/uuid.h 00:06:07.196 TEST_HEADER include/spdk/version.h 00:06:07.196 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:07.196 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:07.196 TEST_HEADER include/spdk/vhost.h 00:06:07.196 TEST_HEADER include/spdk/vmd.h 00:06:07.196 TEST_HEADER include/spdk/xor.h 00:06:07.196 TEST_HEADER include/spdk/zipf.h 00:06:07.196 CXX test/cpp_headers/accel.o 00:06:07.196 CC examples/blob/cli/blobcli.o 00:06:07.196 CC examples/nvme/hello_world/hello_world.o 00:06:07.196 LINK hello_blob 00:06:07.196 CXX test/cpp_headers/accel_module.o 00:06:07.454 CC examples/nvme/reconnect/reconnect.o 00:06:07.454 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:07.454 LINK vhost_fuzz 00:06:07.454 LINK hello_fsdev 00:06:07.454 CXX test/cpp_headers/assert.o 00:06:07.454 LINK accel_perf 00:06:07.713 LINK hello_world 00:06:07.713 CC examples/nvme/arbitration/arbitration.o 00:06:07.713 CXX test/cpp_headers/barrier.o 00:06:07.713 LINK blobcli 00:06:07.713 CC examples/nvme/hotplug/hotplug.o 00:06:07.972 LINK reconnect 
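An aside on the TEST_HEADER / test/cpp_headers pairs above and in the lines that follow: this is SPDK's header self-containment check. Each public header registered with TEST_HEADER gets a matching test/cpp_headers/<name>.o built from a stub that includes only that header, so any header missing a transitive include fails the build. A minimal sketch of the pattern; the stub path, compiler flags, and loop here are illustrative assumptions, not SPDK's actual Makefile rules:

# For every public header, compile a stub translation unit that includes
# only that header; a missing transitive include breaks the compile.
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    stub="test/cpp_headers/${name}.cpp"                      # hypothetical stub location
    printf '#include "spdk/%s.h"\n' "$name" > "$stub"
    g++ -I include -c "$stub" -o "test/cpp_headers/${name}.o" || exit 1
done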
00:06:07.972 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:07.972 CC examples/nvme/abort/abort.o 00:06:07.972 CXX test/cpp_headers/base64.o 00:06:07.972 CC test/app/histogram_perf/histogram_perf.o 00:06:07.972 LINK cmb_copy 00:06:07.972 LINK hotplug 00:06:07.972 LINK nvme_manage 00:06:08.238 CXX test/cpp_headers/bdev.o 00:06:08.238 LINK iscsi_fuzz 00:06:08.238 CC test/app/jsoncat/jsoncat.o 00:06:08.238 CC test/app/stub/stub.o 00:06:08.238 LINK histogram_perf 00:06:08.238 LINK arbitration 00:06:08.238 LINK jsoncat 00:06:08.238 CXX test/cpp_headers/bdev_module.o 00:06:08.509 LINK abort 00:06:08.509 LINK stub 00:06:08.509 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:08.509 CXX test/cpp_headers/bdev_zone.o 00:06:08.509 CC test/event/event_perf/event_perf.o 00:06:08.509 CC test/rpc_client/rpc_client_test.o 00:06:08.766 CC examples/bdev/hello_world/hello_bdev.o 00:06:08.766 CC test/nvme/aer/aer.o 00:06:08.766 CC test/env/mem_callbacks/mem_callbacks.o 00:06:08.766 LINK pmr_persistence 00:06:08.766 CC test/nvme/reset/reset.o 00:06:08.766 CC test/nvme/sgl/sgl.o 00:06:08.766 CC test/nvme/e2edp/nvme_dp.o 00:06:08.766 LINK event_perf 00:06:08.766 LINK rpc_client_test 00:06:09.024 LINK hello_bdev 00:06:09.024 CXX test/cpp_headers/bit_array.o 00:06:09.024 LINK aer 00:06:09.024 LINK reset 00:06:09.024 CC test/nvme/overhead/overhead.o 00:06:09.024 LINK sgl 00:06:09.024 CC test/event/reactor/reactor.o 00:06:09.024 LINK nvme_dp 00:06:09.281 CXX test/cpp_headers/bit_pool.o 00:06:09.281 CC test/event/reactor_perf/reactor_perf.o 00:06:09.281 CXX test/cpp_headers/blob_bdev.o 00:06:09.281 LINK reactor 00:06:09.281 LINK mem_callbacks 00:06:09.281 CC examples/bdev/bdevperf/bdevperf.o 00:06:09.281 LINK reactor_perf 00:06:09.281 CXX test/cpp_headers/blobfs_bdev.o 00:06:09.281 CXX test/cpp_headers/blobfs.o 00:06:09.281 CXX test/cpp_headers/blob.o 00:06:09.540 LINK overhead 00:06:09.540 CC test/nvme/err_injection/err_injection.o 00:06:09.540 CC test/env/vtophys/vtophys.o 00:06:09.540 CC test/nvme/startup/startup.o 00:06:09.540 CC test/nvme/reserve/reserve.o 00:06:09.540 CXX test/cpp_headers/conf.o 00:06:09.798 CC test/nvme/simple_copy/simple_copy.o 00:06:09.798 CC test/nvme/connect_stress/connect_stress.o 00:06:09.798 CC test/event/app_repeat/app_repeat.o 00:06:09.798 LINK err_injection 00:06:09.798 LINK vtophys 00:06:09.798 CC test/nvme/boot_partition/boot_partition.o 00:06:09.798 CXX test/cpp_headers/config.o 00:06:09.798 CXX test/cpp_headers/cpuset.o 00:06:09.798 LINK reserve 00:06:09.798 LINK app_repeat 00:06:10.057 LINK startup 00:06:10.057 LINK connect_stress 00:06:10.057 CXX test/cpp_headers/crc16.o 00:06:10.057 LINK simple_copy 00:06:10.057 LINK boot_partition 00:06:10.057 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:10.057 CXX test/cpp_headers/crc32.o 00:06:10.315 CXX test/cpp_headers/crc64.o 00:06:10.315 CC test/nvme/compliance/nvme_compliance.o 00:06:10.315 CXX test/cpp_headers/dif.o 00:06:10.315 CC test/event/scheduler/scheduler.o 00:06:10.315 CC test/nvme/fused_ordering/fused_ordering.o 00:06:10.315 LINK env_dpdk_post_init 00:06:10.316 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:10.316 CC test/env/pci/pci_ut.o 00:06:10.316 CC test/env/memory/memory_ut.o 00:06:10.574 CXX test/cpp_headers/dma.o 00:06:10.574 LINK bdevperf 00:06:10.574 CC test/nvme/fdp/fdp.o 00:06:10.574 LINK fused_ordering 00:06:10.574 LINK scheduler 00:06:10.574 LINK doorbell_aers 00:06:10.574 LINK nvme_compliance 00:06:10.574 CXX test/cpp_headers/endian.o 00:06:10.574 CC test/nvme/cuse/cuse.o 00:06:10.832 CXX 
test/cpp_headers/env_dpdk.o 00:06:10.832 CXX test/cpp_headers/env.o 00:06:10.832 LINK fdp 00:06:11.091 LINK pci_ut 00:06:11.091 CXX test/cpp_headers/event.o 00:06:11.091 CC examples/nvmf/nvmf/nvmf.o 00:06:11.091 CXX test/cpp_headers/fd_group.o 00:06:11.091 CC test/accel/dif/dif.o 00:06:11.091 CC test/blobfs/mkfs/mkfs.o 00:06:11.091 CXX test/cpp_headers/fd.o 00:06:11.091 CXX test/cpp_headers/file.o 00:06:11.091 CC test/lvol/esnap/esnap.o 00:06:11.348 CXX test/cpp_headers/fsdev.o 00:06:11.349 CXX test/cpp_headers/fsdev_module.o 00:06:11.349 LINK mkfs 00:06:11.349 CXX test/cpp_headers/ftl.o 00:06:11.349 CXX test/cpp_headers/fuse_dispatcher.o 00:06:11.349 LINK nvmf 00:06:11.605 CXX test/cpp_headers/gpt_spec.o 00:06:11.605 CXX test/cpp_headers/hexlify.o 00:06:11.605 CXX test/cpp_headers/histogram_data.o 00:06:11.605 CXX test/cpp_headers/idxd.o 00:06:11.605 CXX test/cpp_headers/idxd_spec.o 00:06:11.605 CXX test/cpp_headers/init.o 00:06:11.921 CXX test/cpp_headers/ioat.o 00:06:11.921 CXX test/cpp_headers/ioat_spec.o 00:06:11.921 CXX test/cpp_headers/iscsi_spec.o 00:06:11.921 LINK memory_ut 00:06:11.921 CXX test/cpp_headers/json.o 00:06:11.921 CXX test/cpp_headers/jsonrpc.o 00:06:11.921 CXX test/cpp_headers/keyring.o 00:06:11.921 LINK dif 00:06:11.921 CXX test/cpp_headers/keyring_module.o 00:06:11.921 CXX test/cpp_headers/likely.o 00:06:11.921 CXX test/cpp_headers/log.o 00:06:11.921 CXX test/cpp_headers/lvol.o 00:06:12.179 CXX test/cpp_headers/md5.o 00:06:12.179 CXX test/cpp_headers/memory.o 00:06:12.179 CXX test/cpp_headers/mmio.o 00:06:12.179 CXX test/cpp_headers/nbd.o 00:06:12.179 CXX test/cpp_headers/net.o 00:06:12.179 CXX test/cpp_headers/notify.o 00:06:12.179 CXX test/cpp_headers/nvme.o 00:06:12.179 CXX test/cpp_headers/nvme_intel.o 00:06:12.179 CXX test/cpp_headers/nvme_ocssd.o 00:06:12.179 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:12.436 CXX test/cpp_headers/nvme_spec.o 00:06:12.436 CXX test/cpp_headers/nvme_zns.o 00:06:12.436 CXX test/cpp_headers/nvmf_cmd.o 00:06:12.436 CC test/bdev/bdevio/bdevio.o 00:06:12.436 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:12.436 CXX test/cpp_headers/nvmf.o 00:06:12.436 CXX test/cpp_headers/nvmf_spec.o 00:06:12.436 CXX test/cpp_headers/nvmf_transport.o 00:06:12.436 CXX test/cpp_headers/opal.o 00:06:12.436 CXX test/cpp_headers/opal_spec.o 00:06:12.694 CXX test/cpp_headers/pci_ids.o 00:06:12.694 LINK cuse 00:06:12.694 CXX test/cpp_headers/pipe.o 00:06:12.694 CXX test/cpp_headers/queue.o 00:06:12.694 CXX test/cpp_headers/reduce.o 00:06:12.694 CXX test/cpp_headers/rpc.o 00:06:12.694 CXX test/cpp_headers/scheduler.o 00:06:12.694 CXX test/cpp_headers/scsi.o 00:06:12.694 CXX test/cpp_headers/scsi_spec.o 00:06:12.694 CXX test/cpp_headers/sock.o 00:06:12.952 CXX test/cpp_headers/stdinc.o 00:06:12.952 LINK bdevio 00:06:12.952 CXX test/cpp_headers/string.o 00:06:12.952 CXX test/cpp_headers/thread.o 00:06:12.952 CXX test/cpp_headers/trace.o 00:06:12.952 CXX test/cpp_headers/trace_parser.o 00:06:12.952 CXX test/cpp_headers/tree.o 00:06:12.952 CXX test/cpp_headers/ublk.o 00:06:12.952 CXX test/cpp_headers/util.o 00:06:12.952 CXX test/cpp_headers/uuid.o 00:06:12.952 CXX test/cpp_headers/version.o 00:06:12.952 CXX test/cpp_headers/vfio_user_pci.o 00:06:12.953 CXX test/cpp_headers/vfio_user_spec.o 00:06:12.953 CXX test/cpp_headers/vhost.o 00:06:12.953 CXX test/cpp_headers/vmd.o 00:06:13.210 CXX test/cpp_headers/xor.o 00:06:13.210 CXX test/cpp_headers/zipf.o 00:06:18.501 LINK esnap 00:06:18.501 00:06:18.501 real 1m43.697s 00:06:18.501 user 9m2.767s 00:06:18.501 sys 2m18.307s 
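The real/user/sys totals above, together with the END TEST make banner just below and the START TEST banners later in the run, come from autotest's run_test wrapper, which times a command between two banners. A hedged sketch of that shape only; the actual helper lives in test/common/autotest_common.sh and additionally handles xtrace and error propagation:

banner() { printf '************************************\n%s\n************************************\n' "$1"; }

run_test() {                      # usage: run_test <name> <command...>
    local name=$1; shift
    banner "START TEST $name"
    time "$@"                     # produces the real/user/sys block
    local rc=$?
    banner "END TEST $name"
    return "$rc"
}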
00:06:18.502 ************************************ 00:06:18.502 END TEST make 00:06:18.502 ************************************ 00:06:18.502 20:34:13 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:18.502 20:34:13 make -- common/autotest_common.sh@10 -- $ set +x 00:06:18.502 20:34:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:18.502 20:34:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:18.502 20:34:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:18.502 20:34:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:18.502 20:34:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:18.502 20:34:13 -- pm/common@44 -- $ pid=5350 00:06:18.502 20:34:13 -- pm/common@50 -- $ kill -TERM 5350 00:06:18.502 20:34:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:18.502 20:34:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:18.502 20:34:13 -- pm/common@44 -- $ pid=5352 00:06:18.502 20:34:13 -- pm/common@50 -- $ kill -TERM 5352 00:06:18.502 20:34:13 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:18.502 20:34:13 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:18.502 20:34:13 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.502 20:34:13 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.502 20:34:13 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.502 20:34:13 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.502 20:34:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.502 20:34:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.502 20:34:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.502 20:34:13 -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.502 20:34:13 -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.502 20:34:13 -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.502 20:34:13 -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.502 20:34:13 -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.502 20:34:13 -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.502 20:34:13 -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.502 20:34:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.502 20:34:13 -- scripts/common.sh@344 -- # case "$op" in 00:06:18.502 20:34:13 -- scripts/common.sh@345 -- # : 1 00:06:18.502 20:34:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.502 20:34:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.502 20:34:13 -- scripts/common.sh@365 -- # decimal 1 00:06:18.502 20:34:13 -- scripts/common.sh@353 -- # local d=1 00:06:18.502 20:34:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.502 20:34:13 -- scripts/common.sh@355 -- # echo 1 00:06:18.502 20:34:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.502 20:34:13 -- scripts/common.sh@366 -- # decimal 2 00:06:18.502 20:34:13 -- scripts/common.sh@353 -- # local d=2 00:06:18.502 20:34:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.502 20:34:13 -- scripts/common.sh@355 -- # echo 2 00:06:18.502 20:34:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.502 20:34:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.502 20:34:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.502 20:34:13 -- scripts/common.sh@368 -- # return 0 00:06:18.502 20:34:13 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.502 20:34:13 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.502 --rc genhtml_branch_coverage=1 00:06:18.502 --rc genhtml_function_coverage=1 00:06:18.502 --rc genhtml_legend=1 00:06:18.502 --rc geninfo_all_blocks=1 00:06:18.502 --rc geninfo_unexecuted_blocks=1 00:06:18.502 00:06:18.502 ' 00:06:18.502 20:34:13 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.502 --rc genhtml_branch_coverage=1 00:06:18.502 --rc genhtml_function_coverage=1 00:06:18.502 --rc genhtml_legend=1 00:06:18.502 --rc geninfo_all_blocks=1 00:06:18.502 --rc geninfo_unexecuted_blocks=1 00:06:18.502 00:06:18.502 ' 00:06:18.502 20:34:13 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.502 --rc genhtml_branch_coverage=1 00:06:18.502 --rc genhtml_function_coverage=1 00:06:18.502 --rc genhtml_legend=1 00:06:18.502 --rc geninfo_all_blocks=1 00:06:18.502 --rc geninfo_unexecuted_blocks=1 00:06:18.502 00:06:18.502 ' 00:06:18.502 20:34:13 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.502 --rc genhtml_branch_coverage=1 00:06:18.502 --rc genhtml_function_coverage=1 00:06:18.502 --rc genhtml_legend=1 00:06:18.502 --rc geninfo_all_blocks=1 00:06:18.502 --rc geninfo_unexecuted_blocks=1 00:06:18.502 00:06:18.502 ' 00:06:18.502 20:34:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.502 20:34:13 -- nvmf/common.sh@7 -- # uname -s 00:06:18.502 20:34:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.502 20:34:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.502 20:34:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.502 20:34:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.502 20:34:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.502 20:34:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.502 20:34:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.502 20:34:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.502 20:34:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.502 20:34:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.502 20:34:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b667ac01-5336-4eb3-bd57-57a0d0e36562 00:06:18.502 
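The xtrace above (repeated later before TEST env) is scripts/common.sh proving lcov 1.15 is older than 2: lt() delegates to cmp_versions(), which splits both version strings on ".", "-" and ":" and compares components numerically left to right, treating missing components as 0. Condensed to its core (numeric components only; the real helper also normalizes fields through its decimal() function):

cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
    local IFS=.-: op=$2
    read -ra v1 <<< "$1"; read -ra v2 <<< "$3"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}  # missing components compare as 0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *=* ]]                   # all equal: true only for <=, >=, ==
}
lt() { cmp_versions "$1" '<' "$2"; }   # e.g. lt 1.15 2 && echo "older"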
20:34:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=b667ac01-5336-4eb3-bd57-57a0d0e36562 00:06:18.502 20:34:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.502 20:34:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.502 20:34:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.502 20:34:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.502 20:34:13 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.502 20:34:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.502 20:34:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.502 20:34:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.502 20:34:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.502 20:34:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.502 20:34:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.502 20:34:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.502 20:34:13 -- paths/export.sh@5 -- # export PATH 00:06:18.503 20:34:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.503 20:34:13 -- nvmf/common.sh@51 -- # : 0 00:06:18.503 20:34:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.503 20:34:13 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.503 20:34:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.503 20:34:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.503 20:34:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.503 20:34:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.503 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.503 20:34:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.503 20:34:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.503 20:34:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.503 20:34:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:18.503 20:34:13 -- spdk/autotest.sh@32 -- # uname -s 00:06:18.503 20:34:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:18.503 20:34:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:18.503 20:34:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:18.503 20:34:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:18.503 20:34:13 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:18.503 20:34:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:18.503 20:34:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:18.503 20:34:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:18.503 20:34:13 -- spdk/autotest.sh@48 -- # udevadm_pid=55011 00:06:18.503 20:34:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:18.503 20:34:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:18.503 20:34:13 -- pm/common@17 -- # local monitor 00:06:18.503 20:34:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:18.503 20:34:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:18.503 20:34:13 -- pm/common@21 -- # date +%s 00:06:18.503 20:34:13 -- pm/common@25 -- # sleep 1 00:06:18.503 20:34:13 -- pm/common@21 -- # date +%s 00:06:18.503 20:34:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732653253 00:06:18.503 20:34:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732653253 00:06:18.762 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732653253_collect-vmstat.pm.log 00:06:18.762 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732653253_collect-cpu-load.pm.log 00:06:19.699 20:34:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:19.699 20:34:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:19.699 20:34:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.699 20:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:19.699 20:34:14 -- spdk/autotest.sh@59 -- # create_test_list 00:06:19.699 20:34:14 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:19.699 20:34:14 -- common/autotest_common.sh@10 -- # set +x 00:06:19.699 20:34:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:19.699 20:34:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:19.699 20:34:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:19.699 20:34:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:19.699 20:34:14 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:19.699 20:34:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:19.699 20:34:14 -- common/autotest_common.sh@1457 -- # uname 00:06:19.699 20:34:14 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:19.699 20:34:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:19.699 20:34:14 -- common/autotest_common.sh@1477 -- # uname 00:06:19.699 20:34:14 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:19.699 20:34:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:19.699 20:34:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:19.699 lcov: LCOV version 1.15 00:06:19.699 20:34:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:37.779 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:37.779 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:55.861 20:34:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:55.861 20:34:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.861 20:34:48 -- common/autotest_common.sh@10 -- # set +x 00:06:55.861 20:34:48 -- spdk/autotest.sh@78 -- # rm -f 00:06:55.861 20:34:48 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:55.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:55.861 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:55.861 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:55.861 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:55.861 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:06:55.861 20:34:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:55.861 20:34:49 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:55.861 20:34:49 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:55.861 20:34:49 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:55.861 20:34:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:55.861 20:34:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:55.861 20:34:49 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:55.861 20:34:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:55.861 20:34:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.861 20:34:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:55.861 20:34:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:55.861 20:34:49 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:55.861 20:34:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:55.861 20:34:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.861 20:34:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:55.861 20:34:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:55.861 20:34:49 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:55.861 20:34:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:55.861 20:34:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.861 20:34:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:55.861 20:34:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:55.861 20:34:49 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:55.861 20:34:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:55.861 20:34:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.861 20:34:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:55.861 20:34:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:55.861 20:34:49 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:55.861 20:34:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:55.862 20:34:49 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.862 20:34:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:55.862 20:34:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:55.862 20:34:49 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:55.862 20:34:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:55.862 20:34:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.862 20:34:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:55.862 20:34:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:55.862 20:34:49 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:55.862 20:34:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:55.862 20:34:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:55.862 20:34:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:55.862 20:34:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.862 20:34:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.862 20:34:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:55.862 20:34:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:55.862 20:34:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:55.862 No valid GPT data, bailing 00:06:55.862 20:34:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:55.862 20:34:49 -- scripts/common.sh@394 -- # pt= 00:06:55.862 20:34:49 -- scripts/common.sh@395 -- # return 1 00:06:55.862 20:34:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:55.862 1+0 records in 00:06:55.862 1+0 records out 00:06:55.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147622 s, 71.0 MB/s 00:06:55.862 20:34:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.862 20:34:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.862 20:34:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:55.862 20:34:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:55.862 20:34:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:55.862 No valid GPT data, bailing 00:06:55.862 20:34:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:55.862 20:34:49 -- scripts/common.sh@394 -- # pt= 00:06:55.862 20:34:49 -- scripts/common.sh@395 -- # return 1 00:06:55.862 20:34:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:55.862 1+0 records in 00:06:55.862 1+0 records out 00:06:55.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00555557 s, 189 MB/s 00:06:55.862 20:34:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.862 20:34:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.862 20:34:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:06:55.862 20:34:49 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:06:55.862 20:34:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:55.862 No valid GPT data, bailing 00:06:55.862 20:34:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:55.862 20:34:50 -- scripts/common.sh@394 -- # pt= 00:06:55.862 20:34:50 -- scripts/common.sh@395 -- # return 1 00:06:55.862 20:34:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:55.862 1+0 
records in 00:06:55.862 1+0 records out 00:06:55.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522047 s, 201 MB/s 00:06:55.862 20:34:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.862 20:34:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.862 20:34:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:06:55.862 20:34:50 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:06:55.862 20:34:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:06:55.862 No valid GPT data, bailing 00:06:55.862 20:34:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:55.862 20:34:50 -- scripts/common.sh@394 -- # pt= 00:06:55.862 20:34:50 -- scripts/common.sh@395 -- # return 1 00:06:55.862 20:34:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:06:55.862 1+0 records in 00:06:55.862 1+0 records out 00:06:55.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00575071 s, 182 MB/s 00:06:55.862 20:34:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.862 20:34:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.862 20:34:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:06:55.862 20:34:50 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:06:55.862 20:34:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:06:55.862 No valid GPT data, bailing 00:06:55.862 20:34:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:55.862 20:34:50 -- scripts/common.sh@394 -- # pt= 00:06:55.862 20:34:50 -- scripts/common.sh@395 -- # return 1 00:06:55.862 20:34:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:06:55.862 1+0 records in 00:06:55.862 1+0 records out 00:06:55.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00555131 s, 189 MB/s 00:06:55.862 20:34:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:55.862 20:34:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:55.862 20:34:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:06:55.862 20:34:50 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:06:55.862 20:34:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:55.862 No valid GPT data, bailing 00:06:55.862 20:34:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:55.862 20:34:50 -- scripts/common.sh@394 -- # pt= 00:06:55.862 20:34:50 -- scripts/common.sh@395 -- # return 1 00:06:55.862 20:34:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:55.862 1+0 records in 00:06:55.862 1+0 records out 00:06:55.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665433 s, 158 MB/s 00:06:55.862 20:34:50 -- spdk/autotest.sh@105 -- # sync 00:06:55.862 20:34:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:55.862 20:34:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:55.862 20:34:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:58.416 20:34:52 -- spdk/autotest.sh@111 -- # uname -s 00:06:58.416 20:34:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:58.416 20:34:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:58.416 20:34:52 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:58.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:58.983 
Hugepages 00:06:58.983 node hugesize free / total 00:06:58.983 node0 1048576kB 0 / 0 00:06:59.351 node0 2048kB 0 / 0 00:06:59.351 00:06:59.351 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:59.351 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:59.351 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:59.351 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:59.351 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:59.610 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:59.610 20:34:54 -- spdk/autotest.sh@117 -- # uname -s 00:06:59.610 20:34:54 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:59.610 20:34:54 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:59.610 20:34:54 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:00.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:00.742 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:00.742 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:00.742 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:01.000 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:01.000 20:34:55 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:01.936 20:34:56 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:01.936 20:34:56 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:01.936 20:34:56 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:01.936 20:34:56 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:01.936 20:34:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:01.936 20:34:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:01.936 20:34:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:01.936 20:34:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:01.936 20:34:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:02.213 20:34:56 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:02.213 20:34:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:02.213 20:34:56 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:02.495 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:02.753 Waiting for block devices as requested 00:07:02.753 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.012 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.012 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.012 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:08.310 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:08.310 20:35:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:08.310 20:35:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:08.310 20:35:03 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:08.310 20:35:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:08.310 20:35:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:08.310 20:35:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:08.310 20:35:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:08.310 20:35:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:08.310 20:35:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1543 -- # continue 00:07:08.310 20:35:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:08.310 20:35:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:08.310 20:35:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:08.310 20:35:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:08.310 20:35:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:08.310 20:35:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:08.310 20:35:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:08.310 20:35:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:08.310 20:35:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1543 -- # continue 00:07:08.310 20:35:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:08.310 20:35:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:08.310 20:35:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:08.310 20:35:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:08.310 20:35:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1543 -- # continue 00:07:08.310 20:35:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:08.310 20:35:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:07:08.310 20:35:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:08.310 20:35:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:08.310 20:35:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:07:08.310 20:35:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:07:08.310 20:35:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:08.310 20:35:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:08.310 20:35:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:08.310 20:35:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:08.310 20:35:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
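The pre_cleanup loop traced above probes every controller the same way: resolve the PCI BDF to its /dev/nvmeX node through sysfs, read OACS from id-ctrl and test bit 3 (0x12a & 0x8 = 8, so namespace management is supported), then check unvmcap (0 here, meaning no unallocated capacity to revert, hence "continue"). The same probe condensed into one function; assumes nvme-cli and root:

probe_nvme() {                                    # usage: probe_nvme 0000:00:10.0
    local bdf=$1 path ctrlr oacs unvmcap
    # each /sys/class/nvme/nvmeN symlink resolves to its PCI device path
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return
    ctrlr=/dev/$(basename "$path")
    oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/^oacs/ {print $2}')
    (( oacs & 0x8 )) && echo "$ctrlr: namespace management supported"
    unvmcap=$(nvme id-ctrl "$ctrlr" | awk -F: '/^unvmcap/ {print $2}')
    (( unvmcap == 0 )) && echo "$ctrlr: no unallocated capacity to revert"
}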
00:07:08.310 20:35:03 -- common/autotest_common.sh@1543 -- # continue 00:07:08.310 20:35:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:08.310 20:35:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:08.310 20:35:03 -- common/autotest_common.sh@10 -- # set +x 00:07:08.569 20:35:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:08.569 20:35:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.569 20:35:03 -- common/autotest_common.sh@10 -- # set +x 00:07:08.569 20:35:03 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:09.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:09.704 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.704 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.704 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.963 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.963 20:35:04 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:09.963 20:35:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:09.963 20:35:04 -- common/autotest_common.sh@10 -- # set +x 00:07:09.963 20:35:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:09.963 20:35:04 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:09.963 20:35:04 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:09.963 20:35:04 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:09.963 20:35:04 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:09.963 20:35:04 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:09.963 20:35:04 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:09.963 20:35:04 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:09.963 20:35:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:09.963 20:35:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:09.963 20:35:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:09.963 20:35:04 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:09.963 20:35:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:09.963 20:35:04 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:09.963 20:35:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:09.963 20:35:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:09.963 20:35:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:09.963 20:35:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:09.963 20:35:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:09.963 20:35:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:09.963 20:35:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:09.963 20:35:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:09.963 20:35:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:09.963 20:35:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:10.222 20:35:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:10.222 20:35:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:10.222 20:35:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:07:10.222 20:35:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:10.222 20:35:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:10.222 20:35:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:10.222 20:35:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:10.222 20:35:04 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:10.222 20:35:04 -- common/autotest_common.sh@1572 -- # return 0 00:07:10.222 20:35:04 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:10.222 20:35:04 -- common/autotest_common.sh@1580 -- # return 0 00:07:10.222 20:35:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:10.222 20:35:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:10.222 20:35:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:10.222 20:35:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:10.222 20:35:04 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:10.222 20:35:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.222 20:35:04 -- common/autotest_common.sh@10 -- # set +x 00:07:10.222 20:35:04 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:10.222 20:35:04 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:10.222 20:35:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.222 20:35:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.222 20:35:04 -- common/autotest_common.sh@10 -- # set +x 00:07:10.222 ************************************ 00:07:10.222 START TEST env 00:07:10.222 ************************************ 00:07:10.222 20:35:04 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:10.222 * Looking for test storage... 00:07:10.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:10.222 20:35:05 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.222 20:35:05 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.222 20:35:05 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.222 20:35:05 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.222 20:35:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.222 20:35:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.222 20:35:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.222 20:35:05 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.222 20:35:05 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.222 20:35:05 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.222 20:35:05 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.222 20:35:05 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.222 20:35:05 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.222 20:35:05 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.222 20:35:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.222 20:35:05 env -- scripts/common.sh@344 -- # case "$op" in 00:07:10.222 20:35:05 env -- scripts/common.sh@345 -- # : 1 00:07:10.222 20:35:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.222 20:35:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.222 20:35:05 env -- scripts/common.sh@365 -- # decimal 1 00:07:10.222 20:35:05 env -- scripts/common.sh@353 -- # local d=1 00:07:10.222 20:35:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.222 20:35:05 env -- scripts/common.sh@355 -- # echo 1 00:07:10.222 20:35:05 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.222 20:35:05 env -- scripts/common.sh@366 -- # decimal 2 00:07:10.222 20:35:05 env -- scripts/common.sh@353 -- # local d=2 00:07:10.222 20:35:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.222 20:35:05 env -- scripts/common.sh@355 -- # echo 2 00:07:10.222 20:35:05 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.222 20:35:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.222 20:35:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.222 20:35:05 env -- scripts/common.sh@368 -- # return 0 00:07:10.222 20:35:05 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.222 20:35:05 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.222 --rc genhtml_branch_coverage=1 00:07:10.222 --rc genhtml_function_coverage=1 00:07:10.222 --rc genhtml_legend=1 00:07:10.222 --rc geninfo_all_blocks=1 00:07:10.222 --rc geninfo_unexecuted_blocks=1 00:07:10.222 00:07:10.222 ' 00:07:10.222 20:35:05 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.222 --rc genhtml_branch_coverage=1 00:07:10.222 --rc genhtml_function_coverage=1 00:07:10.222 --rc genhtml_legend=1 00:07:10.222 --rc geninfo_all_blocks=1 00:07:10.222 --rc geninfo_unexecuted_blocks=1 00:07:10.222 00:07:10.222 ' 00:07:10.222 20:35:05 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.222 --rc genhtml_branch_coverage=1 00:07:10.222 --rc genhtml_function_coverage=1 00:07:10.222 --rc genhtml_legend=1 00:07:10.222 --rc geninfo_all_blocks=1 00:07:10.222 --rc geninfo_unexecuted_blocks=1 00:07:10.222 00:07:10.222 ' 00:07:10.222 20:35:05 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.222 --rc genhtml_branch_coverage=1 00:07:10.222 --rc genhtml_function_coverage=1 00:07:10.222 --rc genhtml_legend=1 00:07:10.222 --rc geninfo_all_blocks=1 00:07:10.222 --rc geninfo_unexecuted_blocks=1 00:07:10.222 00:07:10.222 ' 00:07:10.222 20:35:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:10.222 20:35:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.222 20:35:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.222 20:35:05 env -- common/autotest_common.sh@10 -- # set +x 00:07:10.511 ************************************ 00:07:10.511 START TEST env_memory 00:07:10.511 ************************************ 00:07:10.511 20:35:05 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:10.511 00:07:10.511 00:07:10.511 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.511 http://cunit.sourceforge.net/ 00:07:10.511 00:07:10.511 00:07:10.511 Suite: memory 00:07:10.511 Test: alloc and free memory map ...[2024-11-26 20:35:05.308947] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:10.511 passed 00:07:10.511 Test: mem map translation ...[2024-11-26 20:35:05.382065] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:10.512 [2024-11-26 20:35:05.382178] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:10.512 [2024-11-26 20:35:05.382294] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:10.512 [2024-11-26 20:35:05.382339] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:10.512 passed 00:07:10.512 Test: mem map registration ...[2024-11-26 20:35:05.495573] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:10.512 [2024-11-26 20:35:05.495703] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:10.770 passed 00:07:10.770 Test: mem map adjacent registrations ...passed 00:07:10.771 00:07:10.771 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.771 suites 1 1 n/a 0 0 00:07:10.771 tests 4 4 4 0 0 00:07:10.771 asserts 152 152 152 0 n/a 00:07:10.771 00:07:10.771 Elapsed time = 0.360 seconds 00:07:10.771 00:07:10.771 real 0m0.414s 00:07:10.771 user 0m0.363s 00:07:10.771 sys 0m0.041s 00:07:10.771 20:35:05 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.771 20:35:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:10.771 ************************************ 00:07:10.771 END TEST env_memory 00:07:10.771 ************************************ 00:07:10.771 20:35:05 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:10.771 20:35:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.771 20:35:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.771 20:35:05 env -- common/autotest_common.sh@10 -- # set +x 00:07:10.771 ************************************ 00:07:10.771 START TEST env_vtophys 00:07:10.771 ************************************ 00:07:10.771 20:35:05 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:10.771 EAL: lib.eal log level changed from notice to debug 00:07:10.771 EAL: Detected lcore 0 as core 0 on socket 0 00:07:10.771 EAL: Detected lcore 1 as core 0 on socket 0 00:07:10.771 EAL: Detected lcore 2 as core 0 on socket 0 00:07:10.771 EAL: Detected lcore 3 as core 0 on socket 0 00:07:10.771 EAL: Detected lcore 4 as core 0 on socket 0 00:07:10.771 EAL: Detected lcore 5 as core 0 on socket 0 00:07:10.771 EAL: Detected lcore 6 as core 0 on socket 0 00:07:10.771 EAL: Detected lcore 7 as core 0 on socket 0 00:07:10.771 EAL: Detected lcore 8 as core 0 on socket 0 00:07:10.771 EAL: Detected lcore 9 as core 0 on socket 0 00:07:10.771 EAL: Maximum logical cores by configuration: 128 00:07:10.771 EAL: Detected CPU lcores: 10 00:07:10.771 EAL: Detected NUMA nodes: 1 00:07:10.771 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:10.771 EAL: Detected shared linkage of DPDK 00:07:11.028 EAL: No 
shared files mode enabled, IPC will be disabled 00:07:11.028 EAL: Selected IOVA mode 'PA' 00:07:11.028 EAL: Probing VFIO support... 00:07:11.028 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:11.028 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:11.028 EAL: Ask a virtual area of 0x2e000 bytes 00:07:11.028 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:11.028 EAL: Setting up physically contiguous memory... 00:07:11.028 EAL: Setting maximum number of open files to 524288 00:07:11.028 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:11.028 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:11.028 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.028 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:11.028 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.028 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.028 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:11.028 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:11.028 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.028 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:11.028 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.028 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.028 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:11.028 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:11.028 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.028 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:11.028 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.028 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.028 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:11.028 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:11.028 EAL: Ask a virtual area of 0x61000 bytes 00:07:11.028 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:11.028 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:11.028 EAL: Ask a virtual area of 0x400000000 bytes 00:07:11.028 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:11.028 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:11.028 EAL: Hugepages will be freed exactly as allocated. 00:07:11.028 EAL: No shared files mode enabled, IPC is disabled 00:07:11.028 EAL: No shared files mode enabled, IPC is disabled 00:07:11.028 EAL: TSC frequency is ~2100000 KHz 00:07:11.028 EAL: Main lcore 0 is ready (tid=7f0a0dca3a40;cpuset=[0]) 00:07:11.028 EAL: Trying to obtain current memory policy. 00:07:11.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.028 EAL: Restoring previous memory policy: 0 00:07:11.028 EAL: request: mp_malloc_sync 00:07:11.028 EAL: No shared files mode enabled, IPC is disabled 00:07:11.028 EAL: Heap on socket 0 was expanded by 2MB 00:07:11.028 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:11.028 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:11.028 EAL: Mem event callback 'spdk:(nil)' registered 00:07:11.028 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:07:11.028 00:07:11.028 00:07:11.028 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.028 http://cunit.sourceforge.net/ 00:07:11.028 00:07:11.028 00:07:11.028 Suite: components_suite 00:07:11.963 Test: vtophys_malloc_test ...passed 00:07:11.963 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:11.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.963 EAL: Restoring previous memory policy: 4 00:07:11.963 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.963 EAL: request: mp_malloc_sync 00:07:11.963 EAL: No shared files mode enabled, IPC is disabled 00:07:11.963 EAL: Heap on socket 0 was expanded by 4MB 00:07:11.963 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.963 EAL: request: mp_malloc_sync 00:07:11.963 EAL: No shared files mode enabled, IPC is disabled 00:07:11.963 EAL: Heap on socket 0 was shrunk by 4MB 00:07:11.963 EAL: Trying to obtain current memory policy. 00:07:11.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.963 EAL: Restoring previous memory policy: 4 00:07:11.963 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.963 EAL: request: mp_malloc_sync 00:07:11.963 EAL: No shared files mode enabled, IPC is disabled 00:07:11.963 EAL: Heap on socket 0 was expanded by 6MB 00:07:11.963 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.963 EAL: request: mp_malloc_sync 00:07:11.963 EAL: No shared files mode enabled, IPC is disabled 00:07:11.963 EAL: Heap on socket 0 was shrunk by 6MB 00:07:11.964 EAL: Trying to obtain current memory policy. 00:07:11.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.964 EAL: Restoring previous memory policy: 4 00:07:11.964 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.964 EAL: request: mp_malloc_sync 00:07:11.964 EAL: No shared files mode enabled, IPC is disabled 00:07:11.964 EAL: Heap on socket 0 was expanded by 10MB 00:07:11.964 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.964 EAL: request: mp_malloc_sync 00:07:11.964 EAL: No shared files mode enabled, IPC is disabled 00:07:11.964 EAL: Heap on socket 0 was shrunk by 10MB 00:07:11.964 EAL: Trying to obtain current memory policy. 00:07:11.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.964 EAL: Restoring previous memory policy: 4 00:07:11.964 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.964 EAL: request: mp_malloc_sync 00:07:11.964 EAL: No shared files mode enabled, IPC is disabled 00:07:11.964 EAL: Heap on socket 0 was expanded by 18MB 00:07:11.964 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.964 EAL: request: mp_malloc_sync 00:07:11.964 EAL: No shared files mode enabled, IPC is disabled 00:07:11.964 EAL: Heap on socket 0 was shrunk by 18MB 00:07:11.964 EAL: Trying to obtain current memory policy. 00:07:11.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.964 EAL: Restoring previous memory policy: 4 00:07:11.964 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.964 EAL: request: mp_malloc_sync 00:07:11.964 EAL: No shared files mode enabled, IPC is disabled 00:07:11.964 EAL: Heap on socket 0 was expanded by 34MB 00:07:11.964 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.964 EAL: request: mp_malloc_sync 00:07:11.964 EAL: No shared files mode enabled, IPC is disabled 00:07:11.964 EAL: Heap on socket 0 was shrunk by 34MB 00:07:11.964 EAL: Trying to obtain current memory policy. 
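The vtophys rounds above and below step the EAL heap through expansions of 2^k + 2 MB (4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB), each mirrored by a matching shrink once the buffer is freed via the 'spdk:(nil)' mem event callback. The invalid-parameter errors in the env_memory suite further up follow the same rule the callbacks rely on: SPDK mem maps track memory at 2 MiB granularity, so a registration such as vaddr=200000 len=1234 is rejected. Below is a minimal sketch of the translation path these tests exercise, assuming only the public API in include/spdk/env.h; it is an illustration, not the test's actual source:

#include "spdk/env.h"
#include <stdio.h>

int
main(void)
{
	struct spdk_env_opts opts;
	uint64_t paddr, size = 2 * 1024 * 1024;
	void *buf;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Pinned, DMA-safe allocation from the EAL heap; this is the kind of
	 * allocation that fires the expand/shrink mem event callbacks above. */
	buf = spdk_dma_zmalloc(size, 0x200000, NULL);
	if (buf == NULL) {
		return 1;
	}

	/* Translate the virtual address; on success, size is clamped to the
	 * physically contiguous length starting at buf. */
	paddr = spdk_vtophys(buf, &size);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		fprintf(stderr, "translation failed\n");
	} else {
		printf("va %p -> pa 0x%" PRIx64 " (%" PRIu64 " bytes contiguous)\n",
		       buf, paddr, size);
	}

	spdk_dma_free(buf);
	spdk_env_fini();
	return 0;
}

Compiled against SPDK and run with hugepages configured, this prints one VA-to-PA pair and produces the same paired heap expand/shrink messages seen in the log; the expand/shrink ladder continues below.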
00:07:11.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.224 EAL: Restoring previous memory policy: 4 00:07:12.224 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.224 EAL: request: mp_malloc_sync 00:07:12.224 EAL: No shared files mode enabled, IPC is disabled 00:07:12.224 EAL: Heap on socket 0 was expanded by 66MB 00:07:12.224 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.224 EAL: request: mp_malloc_sync 00:07:12.224 EAL: No shared files mode enabled, IPC is disabled 00:07:12.224 EAL: Heap on socket 0 was shrunk by 66MB 00:07:12.483 EAL: Trying to obtain current memory policy. 00:07:12.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.483 EAL: Restoring previous memory policy: 4 00:07:12.483 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.483 EAL: request: mp_malloc_sync 00:07:12.483 EAL: No shared files mode enabled, IPC is disabled 00:07:12.483 EAL: Heap on socket 0 was expanded by 130MB 00:07:12.742 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.742 EAL: request: mp_malloc_sync 00:07:12.742 EAL: No shared files mode enabled, IPC is disabled 00:07:12.742 EAL: Heap on socket 0 was shrunk by 130MB 00:07:13.001 EAL: Trying to obtain current memory policy. 00:07:13.001 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:13.001 EAL: Restoring previous memory policy: 4 00:07:13.001 EAL: Calling mem event callback 'spdk:(nil)' 00:07:13.001 EAL: request: mp_malloc_sync 00:07:13.001 EAL: No shared files mode enabled, IPC is disabled 00:07:13.001 EAL: Heap on socket 0 was expanded by 258MB 00:07:13.635 EAL: Calling mem event callback 'spdk:(nil)' 00:07:13.635 EAL: request: mp_malloc_sync 00:07:13.635 EAL: No shared files mode enabled, IPC is disabled 00:07:13.635 EAL: Heap on socket 0 was shrunk by 258MB 00:07:14.201 EAL: Trying to obtain current memory policy. 00:07:14.201 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.458 EAL: Restoring previous memory policy: 4 00:07:14.458 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.458 EAL: request: mp_malloc_sync 00:07:14.458 EAL: No shared files mode enabled, IPC is disabled 00:07:14.458 EAL: Heap on socket 0 was expanded by 514MB 00:07:15.395 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.653 EAL: request: mp_malloc_sync 00:07:15.653 EAL: No shared files mode enabled, IPC is disabled 00:07:15.653 EAL: Heap on socket 0 was shrunk by 514MB 00:07:16.596 EAL: Trying to obtain current memory policy. 
00:07:16.596 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:16.854 EAL: Restoring previous memory policy: 4 00:07:16.854 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.854 EAL: request: mp_malloc_sync 00:07:16.854 EAL: No shared files mode enabled, IPC is disabled 00:07:16.854 EAL: Heap on socket 0 was expanded by 1026MB 00:07:18.754 EAL: Calling mem event callback 'spdk:(nil)' 00:07:19.012 EAL: request: mp_malloc_sync 00:07:19.012 EAL: No shared files mode enabled, IPC is disabled 00:07:19.012 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:21.079 passed 00:07:21.079 00:07:21.079 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.079 suites 1 1 n/a 0 0 00:07:21.079 tests 2 2 2 0 0 00:07:21.079 asserts 5719 5719 5719 0 n/a 00:07:21.079 00:07:21.079 Elapsed time = 9.923 seconds 00:07:21.079 EAL: Calling mem event callback 'spdk:(nil)' 00:07:21.079 EAL: request: mp_malloc_sync 00:07:21.079 EAL: No shared files mode enabled, IPC is disabled 00:07:21.079 EAL: Heap on socket 0 was shrunk by 2MB 00:07:21.079 EAL: No shared files mode enabled, IPC is disabled 00:07:21.079 EAL: No shared files mode enabled, IPC is disabled 00:07:21.079 EAL: No shared files mode enabled, IPC is disabled 00:07:21.079 00:07:21.079 real 0m10.297s 00:07:21.079 user 0m8.870s 00:07:21.079 sys 0m1.244s 00:07:21.079 20:35:15 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.079 20:35:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:21.079 ************************************ 00:07:21.079 END TEST env_vtophys 00:07:21.079 ************************************ 00:07:21.337 20:35:16 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:21.337 20:35:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.337 20:35:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.337 20:35:16 env -- common/autotest_common.sh@10 -- # set +x 00:07:21.337 ************************************ 00:07:21.337 START TEST env_pci 00:07:21.337 ************************************ 00:07:21.337 20:35:16 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:21.337 00:07:21.337 00:07:21.337 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.337 http://cunit.sourceforge.net/ 00:07:21.337 00:07:21.337 00:07:21.337 Suite: pci 00:07:21.337 Test: pci_hook ...[2024-11-26 20:35:16.099829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57889 has claimed it 00:07:21.337 passed 00:07:21.337 00:07:21.337 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.337 suites 1 1 n/a 0 0 00:07:21.337 tests 1 1 1 0 0 00:07:21.337 asserts 25 25 25 0 n/a 00:07:21.337 00:07:21.337 Elapsed time = 0.009 seconds 00:07:21.337 EAL: Cannot find device (10000:00:01.0) 00:07:21.338 EAL: Failed to attach device on primary process 00:07:21.338 00:07:21.338 real 0m0.098s 00:07:21.338 user 0m0.047s 00:07:21.338 sys 0m0.049s 00:07:21.338 20:35:16 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.338 20:35:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:21.338 ************************************ 00:07:21.338 END TEST env_pci 00:07:21.338 ************************************ 00:07:21.338 20:35:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:21.338 20:35:16 env -- env/env.sh@15 -- # uname 00:07:21.338 20:35:16 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:21.338 20:35:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:21.338 20:35:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:21.338 20:35:16 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:21.338 20:35:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.338 20:35:16 env -- common/autotest_common.sh@10 -- # set +x 00:07:21.338 ************************************ 00:07:21.338 START TEST env_dpdk_post_init 00:07:21.338 ************************************ 00:07:21.338 20:35:16 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:21.338 EAL: Detected CPU lcores: 10 00:07:21.338 EAL: Detected NUMA nodes: 1 00:07:21.338 EAL: Detected shared linkage of DPDK 00:07:21.597 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:21.597 EAL: Selected IOVA mode 'PA' 00:07:21.597 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:21.597 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:21.597 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:21.597 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:21.597 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:21.855 Starting DPDK initialization... 00:07:21.855 Starting SPDK post initialization... 00:07:21.855 SPDK NVMe probe 00:07:21.855 Attaching to 0000:00:10.0 00:07:21.855 Attaching to 0000:00:11.0 00:07:21.855 Attaching to 0000:00:12.0 00:07:21.855 Attaching to 0000:00:13.0 00:07:21.855 Attached to 0000:00:10.0 00:07:21.855 Attached to 0000:00:11.0 00:07:21.855 Attached to 0000:00:13.0 00:07:21.855 Attached to 0000:00:12.0 00:07:21.855 Cleaning up... 
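Above, env_dpdk_post_init probes the four emulated NVMe controllers (PCI ID 1b36:0010, the QEMU NVMe device) and attaches to each; controllers finish initialization at different times, which is likely why 0000:00:13.0 reports attached before 0000:00:12.0 even though probing starts in bus order. A minimal probe/attach sketch against the public API in include/spdk/nvme.h follows — a simplified stand-in for the test binary, not its source:

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;	/* attach to every controller the probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	/* a real application would store ctrlr and walk its namespaces here */
	printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "probe_sketch";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* NULL trid: enumerate all PCIe NVMe devices bound to a userspace driver. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		fprintf(stderr, "spdk_nvme_probe() failed\n");
		return 1;
	}
	return 0;
}

Passing a NULL transport ID asks spdk_nvme_probe() to enumerate every PCIe NVMe device bound to a userspace driver — exactly the four 'Probe PCI driver: spdk_nvme' lines above.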
00:07:21.855 00:07:21.855 real 0m0.393s 00:07:21.855 user 0m0.134s 00:07:21.855 sys 0m0.160s 00:07:21.855 20:35:16 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.855 20:35:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:21.855 ************************************ 00:07:21.855 END TEST env_dpdk_post_init 00:07:21.855 ************************************ 00:07:21.855 20:35:16 env -- env/env.sh@26 -- # uname 00:07:21.855 20:35:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:21.855 20:35:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:21.855 20:35:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.855 20:35:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.855 20:35:16 env -- common/autotest_common.sh@10 -- # set +x 00:07:21.855 ************************************ 00:07:21.855 START TEST env_mem_callbacks 00:07:21.855 ************************************ 00:07:21.855 20:35:16 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:21.855 EAL: Detected CPU lcores: 10 00:07:21.855 EAL: Detected NUMA nodes: 1 00:07:21.855 EAL: Detected shared linkage of DPDK 00:07:21.855 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:21.855 EAL: Selected IOVA mode 'PA' 00:07:22.113 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:22.113 00:07:22.113 00:07:22.113 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.113 http://cunit.sourceforge.net/ 00:07:22.113 00:07:22.113 00:07:22.113 Suite: memory 00:07:22.113 Test: test ... 00:07:22.113 register 0x200000200000 2097152 00:07:22.113 malloc 3145728 00:07:22.113 register 0x200000400000 4194304 00:07:22.113 buf 0x2000004fffc0 len 3145728 PASSED 00:07:22.113 malloc 64 00:07:22.113 buf 0x2000004ffec0 len 64 PASSED 00:07:22.113 malloc 4194304 00:07:22.113 register 0x200000800000 6291456 00:07:22.113 buf 0x2000009fffc0 len 4194304 PASSED 00:07:22.113 free 0x2000004fffc0 3145728 00:07:22.113 free 0x2000004ffec0 64 00:07:22.113 unregister 0x200000400000 4194304 PASSED 00:07:22.113 free 0x2000009fffc0 4194304 00:07:22.113 unregister 0x200000800000 6291456 PASSED 00:07:22.113 malloc 8388608 00:07:22.113 register 0x200000400000 10485760 00:07:22.113 buf 0x2000005fffc0 len 8388608 PASSED 00:07:22.113 free 0x2000005fffc0 8388608 00:07:22.113 unregister 0x200000400000 10485760 PASSED 00:07:22.113 passed 00:07:22.113 00:07:22.113 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.113 suites 1 1 n/a 0 0 00:07:22.113 tests 1 1 1 0 0 00:07:22.113 asserts 15 15 15 0 n/a 00:07:22.113 00:07:22.113 Elapsed time = 0.115 seconds 00:07:22.113 00:07:22.113 real 0m0.364s 00:07:22.113 user 0m0.157s 00:07:22.113 sys 0m0.104s 00:07:22.113 20:35:17 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.113 20:35:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:22.113 ************************************ 00:07:22.113 END TEST env_mem_callbacks 00:07:22.113 ************************************ 00:07:22.113 00:07:22.113 real 0m12.105s 00:07:22.113 user 0m9.805s 00:07:22.113 sys 0m1.919s 00:07:22.113 20:35:17 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.113 20:35:17 env -- common/autotest_common.sh@10 -- # set +x 00:07:22.113 ************************************ 00:07:22.113 END TEST env 00:07:22.113 
************************************ 00:07:22.371 20:35:17 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:22.371 20:35:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.371 20:35:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.371 20:35:17 -- common/autotest_common.sh@10 -- # set +x 00:07:22.371 ************************************ 00:07:22.371 START TEST rpc 00:07:22.371 ************************************ 00:07:22.371 20:35:17 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:22.371 * Looking for test storage... 00:07:22.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:22.371 20:35:17 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:22.371 20:35:17 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:22.371 20:35:17 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:22.371 20:35:17 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:22.371 20:35:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.371 20:35:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.371 20:35:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.371 20:35:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.371 20:35:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.371 20:35:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.371 20:35:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.371 20:35:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.371 20:35:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.371 20:35:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.371 20:35:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.371 20:35:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:22.371 20:35:17 rpc -- scripts/common.sh@345 -- # : 1 00:07:22.371 20:35:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.371 20:35:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.372 20:35:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:22.372 20:35:17 rpc -- scripts/common.sh@353 -- # local d=1 00:07:22.372 20:35:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.372 20:35:17 rpc -- scripts/common.sh@355 -- # echo 1 00:07:22.372 20:35:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.372 20:35:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:22.372 20:35:17 rpc -- scripts/common.sh@353 -- # local d=2 00:07:22.372 20:35:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.372 20:35:17 rpc -- scripts/common.sh@355 -- # echo 2 00:07:22.372 20:35:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.372 20:35:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.372 20:35:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.372 20:35:17 rpc -- scripts/common.sh@368 -- # return 0 00:07:22.372 20:35:17 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.372 20:35:17 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:22.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.372 --rc genhtml_branch_coverage=1 00:07:22.372 --rc genhtml_function_coverage=1 00:07:22.372 --rc genhtml_legend=1 00:07:22.372 --rc geninfo_all_blocks=1 00:07:22.372 --rc geninfo_unexecuted_blocks=1 00:07:22.372 00:07:22.372 ' 00:07:22.372 20:35:17 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:22.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.372 --rc genhtml_branch_coverage=1 00:07:22.372 --rc genhtml_function_coverage=1 00:07:22.372 --rc genhtml_legend=1 00:07:22.372 --rc geninfo_all_blocks=1 00:07:22.372 --rc geninfo_unexecuted_blocks=1 00:07:22.372 00:07:22.372 ' 00:07:22.372 20:35:17 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:22.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.372 --rc genhtml_branch_coverage=1 00:07:22.372 --rc genhtml_function_coverage=1 00:07:22.372 --rc genhtml_legend=1 00:07:22.372 --rc geninfo_all_blocks=1 00:07:22.372 --rc geninfo_unexecuted_blocks=1 00:07:22.372 00:07:22.372 ' 00:07:22.372 20:35:17 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:22.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.372 --rc genhtml_branch_coverage=1 00:07:22.372 --rc genhtml_function_coverage=1 00:07:22.372 --rc genhtml_legend=1 00:07:22.372 --rc geninfo_all_blocks=1 00:07:22.372 --rc geninfo_unexecuted_blocks=1 00:07:22.372 00:07:22.372 ' 00:07:22.372 20:35:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58022 00:07:22.372 20:35:17 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:22.372 20:35:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:22.372 20:35:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58022 00:07:22.372 20:35:17 rpc -- common/autotest_common.sh@835 -- # '[' -z 58022 ']' 00:07:22.372 20:35:17 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.372 20:35:17 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.372 20:35:17 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:22.372 20:35:17 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.372 20:35:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.629 [2024-11-26 20:35:17.533261] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:22.629 [2024-11-26 20:35:17.533753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58022 ] 00:07:22.887 [2024-11-26 20:35:17.741156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.144 [2024-11-26 20:35:17.954666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:23.144 [2024-11-26 20:35:17.954766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58022' to capture a snapshot of events at runtime. 00:07:23.144 [2024-11-26 20:35:17.954789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.144 [2024-11-26 20:35:17.954818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.144 [2024-11-26 20:35:17.954835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58022 for offline analysis/debug. 00:07:23.144 [2024-11-26 20:35:17.957194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.517 20:35:19 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.517 20:35:19 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.517 20:35:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:24.517 20:35:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:24.517 20:35:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:24.517 20:35:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:24.517 20:35:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.517 20:35:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.517 20:35:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.517 ************************************ 00:07:24.517 START TEST rpc_integrity 00:07:24.517 ************************************ 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.517 20:35:19 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:24.517 { 00:07:24.517 "name": "Malloc0", 00:07:24.517 "aliases": [ 00:07:24.517 "ce5582e5-c262-47db-ade4-d280e42bd68b" 00:07:24.517 ], 00:07:24.517 "product_name": "Malloc disk", 00:07:24.517 "block_size": 512, 00:07:24.517 "num_blocks": 16384, 00:07:24.517 "uuid": "ce5582e5-c262-47db-ade4-d280e42bd68b", 00:07:24.517 "assigned_rate_limits": { 00:07:24.517 "rw_ios_per_sec": 0, 00:07:24.517 "rw_mbytes_per_sec": 0, 00:07:24.517 "r_mbytes_per_sec": 0, 00:07:24.517 "w_mbytes_per_sec": 0 00:07:24.517 }, 00:07:24.517 "claimed": false, 00:07:24.517 "zoned": false, 00:07:24.517 "supported_io_types": { 00:07:24.517 "read": true, 00:07:24.517 "write": true, 00:07:24.517 "unmap": true, 00:07:24.517 "flush": true, 00:07:24.517 "reset": true, 00:07:24.517 "nvme_admin": false, 00:07:24.517 "nvme_io": false, 00:07:24.517 "nvme_io_md": false, 00:07:24.517 "write_zeroes": true, 00:07:24.517 "zcopy": true, 00:07:24.517 "get_zone_info": false, 00:07:24.517 "zone_management": false, 00:07:24.517 "zone_append": false, 00:07:24.517 "compare": false, 00:07:24.517 "compare_and_write": false, 00:07:24.517 "abort": true, 00:07:24.517 "seek_hole": false, 00:07:24.517 "seek_data": false, 00:07:24.517 "copy": true, 00:07:24.517 "nvme_iov_md": false 00:07:24.517 }, 00:07:24.517 "memory_domains": [ 00:07:24.517 { 00:07:24.517 "dma_device_id": "system", 00:07:24.517 "dma_device_type": 1 00:07:24.517 }, 00:07:24.517 { 00:07:24.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.517 "dma_device_type": 2 00:07:24.517 } 00:07:24.517 ], 00:07:24.517 "driver_specific": {} 00:07:24.517 } 00:07:24.517 ]' 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.517 [2024-11-26 20:35:19.385388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:24.517 [2024-11-26 20:35:19.385487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.517 [2024-11-26 20:35:19.385538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:24.517 [2024-11-26 20:35:19.385559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.517 [2024-11-26 20:35:19.389297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.517 [2024-11-26 20:35:19.389476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:24.517 Passthru0 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.517 
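Each rpc_cmd above is a thin wrapper around scripts/rpc.py talking JSON-RPC to spdk_tgt over /var/tmp/spdk.sock; on the server side, every method (bdev_malloc_create, bdev_passthru_create, ...) is a C handler registered with SPDK_RPC_REGISTER. A sketch of that registration shape follows, assuming the server API in include/spdk/rpc.h and include/spdk/jsonrpc.h; the hello_world method, struct and handler names here are hypothetical, not SPDK's actual bdev handlers:

#include "spdk/stdinc.h"
#include "spdk/json.h"
#include "spdk/jsonrpc.h"
#include "spdk/rpc.h"
#include "spdk/util.h"

/* hypothetical parameter struct and method, for illustration only */
struct rpc_hello {
	char *name;
};

static const struct spdk_json_object_decoder rpc_hello_decoders[] = {
	{"name", offsetof(struct rpc_hello, name), spdk_json_decode_string},
};

static void
rpc_hello_world(struct spdk_jsonrpc_request *request,
		const struct spdk_json_val *params)
{
	struct rpc_hello req = {};
	struct spdk_json_write_ctx *w;

	/* reject malformed parameters, as the trace asserts 25 does above */
	if (spdk_json_decode_object(params, rpc_hello_decoders,
				    SPDK_COUNTOF(rpc_hello_decoders), &req)) {
		spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
						 "Invalid parameters");
		return;
	}

	w = spdk_jsonrpc_begin_result(request);
	spdk_json_write_string_fmt(w, "hello, %s", req.name);
	spdk_jsonrpc_end_result(request, w);
	free(req.name);
}
SPDK_RPC_REGISTER("hello_world", rpc_hello_world, SPDK_RPC_RUNTIME)

The rpc_plugins test further below exercises the same mechanism from the Python side, loading a plugin that maps create_malloc/delete_malloc onto the built-in bdev RPCs.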
20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.517 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.517 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:24.517 { 00:07:24.517 "name": "Malloc0", 00:07:24.517 "aliases": [ 00:07:24.517 "ce5582e5-c262-47db-ade4-d280e42bd68b" 00:07:24.517 ], 00:07:24.517 "product_name": "Malloc disk", 00:07:24.517 "block_size": 512, 00:07:24.517 "num_blocks": 16384, 00:07:24.517 "uuid": "ce5582e5-c262-47db-ade4-d280e42bd68b", 00:07:24.518 "assigned_rate_limits": { 00:07:24.518 "rw_ios_per_sec": 0, 00:07:24.518 "rw_mbytes_per_sec": 0, 00:07:24.518 "r_mbytes_per_sec": 0, 00:07:24.518 "w_mbytes_per_sec": 0 00:07:24.518 }, 00:07:24.518 "claimed": true, 00:07:24.518 "claim_type": "exclusive_write", 00:07:24.518 "zoned": false, 00:07:24.518 "supported_io_types": { 00:07:24.518 "read": true, 00:07:24.518 "write": true, 00:07:24.518 "unmap": true, 00:07:24.518 "flush": true, 00:07:24.518 "reset": true, 00:07:24.518 "nvme_admin": false, 00:07:24.518 "nvme_io": false, 00:07:24.518 "nvme_io_md": false, 00:07:24.518 "write_zeroes": true, 00:07:24.518 "zcopy": true, 00:07:24.518 "get_zone_info": false, 00:07:24.518 "zone_management": false, 00:07:24.518 "zone_append": false, 00:07:24.518 "compare": false, 00:07:24.518 "compare_and_write": false, 00:07:24.518 "abort": true, 00:07:24.518 "seek_hole": false, 00:07:24.518 "seek_data": false, 00:07:24.518 "copy": true, 00:07:24.518 "nvme_iov_md": false 00:07:24.518 }, 00:07:24.518 "memory_domains": [ 00:07:24.518 { 00:07:24.518 "dma_device_id": "system", 00:07:24.518 "dma_device_type": 1 00:07:24.518 }, 00:07:24.518 { 00:07:24.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.518 "dma_device_type": 2 00:07:24.518 } 00:07:24.518 ], 00:07:24.518 "driver_specific": {} 00:07:24.518 }, 00:07:24.518 { 00:07:24.518 "name": "Passthru0", 00:07:24.518 "aliases": [ 00:07:24.518 "480b5c51-931a-5c7e-aca9-b999d32b0634" 00:07:24.518 ], 00:07:24.518 "product_name": "passthru", 00:07:24.518 "block_size": 512, 00:07:24.518 "num_blocks": 16384, 00:07:24.518 "uuid": "480b5c51-931a-5c7e-aca9-b999d32b0634", 00:07:24.518 "assigned_rate_limits": { 00:07:24.518 "rw_ios_per_sec": 0, 00:07:24.518 "rw_mbytes_per_sec": 0, 00:07:24.518 "r_mbytes_per_sec": 0, 00:07:24.518 "w_mbytes_per_sec": 0 00:07:24.518 }, 00:07:24.518 "claimed": false, 00:07:24.518 "zoned": false, 00:07:24.518 "supported_io_types": { 00:07:24.518 "read": true, 00:07:24.518 "write": true, 00:07:24.518 "unmap": true, 00:07:24.518 "flush": true, 00:07:24.518 "reset": true, 00:07:24.518 "nvme_admin": false, 00:07:24.518 "nvme_io": false, 00:07:24.518 "nvme_io_md": false, 00:07:24.518 "write_zeroes": true, 00:07:24.518 "zcopy": true, 00:07:24.518 "get_zone_info": false, 00:07:24.518 "zone_management": false, 00:07:24.518 "zone_append": false, 00:07:24.518 "compare": false, 00:07:24.518 "compare_and_write": false, 00:07:24.518 "abort": true, 00:07:24.518 "seek_hole": false, 00:07:24.518 "seek_data": false, 00:07:24.518 "copy": true, 00:07:24.518 "nvme_iov_md": false 00:07:24.518 }, 00:07:24.518 "memory_domains": [ 00:07:24.518 { 00:07:24.518 "dma_device_id": "system", 00:07:24.518 "dma_device_type": 1 00:07:24.518 }, 00:07:24.518 { 00:07:24.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.518 "dma_device_type": 2 
00:07:24.518 } 00:07:24.518 ], 00:07:24.518 "driver_specific": { 00:07:24.518 "passthru": { 00:07:24.518 "name": "Passthru0", 00:07:24.518 "base_bdev_name": "Malloc0" 00:07:24.518 } 00:07:24.518 } 00:07:24.518 } 00:07:24.518 ]' 00:07:24.518 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:24.518 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:24.518 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:24.518 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.518 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.518 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.518 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:24.518 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.518 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.776 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.776 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:24.776 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.776 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.776 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.776 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:24.776 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:24.776 20:35:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:24.776 00:07:24.776 real 0m0.370s 00:07:24.776 user 0m0.196s 00:07:24.776 sys 0m0.052s 00:07:24.776 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.776 20:35:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.776 ************************************ 00:07:24.776 END TEST rpc_integrity 00:07:24.776 ************************************ 00:07:24.776 20:35:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:24.776 20:35:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.776 20:35:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.776 20:35:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.776 ************************************ 00:07:24.776 START TEST rpc_plugins 00:07:24.776 ************************************ 00:07:24.776 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:24.776 20:35:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:24.776 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.776 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:24.776 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.776 20:35:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:24.776 20:35:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:24.776 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.776 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:24.776 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.776 20:35:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:24.776 { 00:07:24.776 "name": "Malloc1", 00:07:24.776 "aliases": 
[ 00:07:24.776 "4d60311d-1754-4bcc-a398-0f8959fde77c" 00:07:24.776 ], 00:07:24.776 "product_name": "Malloc disk", 00:07:24.776 "block_size": 4096, 00:07:24.776 "num_blocks": 256, 00:07:24.776 "uuid": "4d60311d-1754-4bcc-a398-0f8959fde77c", 00:07:24.776 "assigned_rate_limits": { 00:07:24.776 "rw_ios_per_sec": 0, 00:07:24.776 "rw_mbytes_per_sec": 0, 00:07:24.776 "r_mbytes_per_sec": 0, 00:07:24.776 "w_mbytes_per_sec": 0 00:07:24.776 }, 00:07:24.776 "claimed": false, 00:07:24.776 "zoned": false, 00:07:24.776 "supported_io_types": { 00:07:24.776 "read": true, 00:07:24.776 "write": true, 00:07:24.776 "unmap": true, 00:07:24.776 "flush": true, 00:07:24.776 "reset": true, 00:07:24.776 "nvme_admin": false, 00:07:24.776 "nvme_io": false, 00:07:24.776 "nvme_io_md": false, 00:07:24.776 "write_zeroes": true, 00:07:24.776 "zcopy": true, 00:07:24.776 "get_zone_info": false, 00:07:24.776 "zone_management": false, 00:07:24.776 "zone_append": false, 00:07:24.776 "compare": false, 00:07:24.776 "compare_and_write": false, 00:07:24.776 "abort": true, 00:07:24.776 "seek_hole": false, 00:07:24.776 "seek_data": false, 00:07:24.776 "copy": true, 00:07:24.776 "nvme_iov_md": false 00:07:24.776 }, 00:07:24.776 "memory_domains": [ 00:07:24.776 { 00:07:24.776 "dma_device_id": "system", 00:07:24.776 "dma_device_type": 1 00:07:24.776 }, 00:07:24.776 { 00:07:24.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.776 "dma_device_type": 2 00:07:24.776 } 00:07:24.776 ], 00:07:24.776 "driver_specific": {} 00:07:24.776 } 00:07:24.776 ]' 00:07:24.776 20:35:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:24.776 20:35:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:24.776 20:35:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:24.776 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.776 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:25.034 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.034 20:35:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:25.035 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.035 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:25.035 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.035 20:35:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:25.035 20:35:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:25.035 ************************************ 00:07:25.035 END TEST rpc_plugins 00:07:25.035 ************************************ 00:07:25.035 20:35:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:25.035 00:07:25.035 real 0m0.204s 00:07:25.035 user 0m0.123s 00:07:25.035 sys 0m0.021s 00:07:25.035 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.035 20:35:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:25.035 20:35:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:25.035 20:35:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.035 20:35:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.035 20:35:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.035 ************************************ 00:07:25.035 START TEST rpc_trace_cmd_test 00:07:25.035 ************************************ 00:07:25.035 20:35:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:07:25.035 20:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:25.035 20:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:25.035 20:35:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.035 20:35:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.035 20:35:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.035 20:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:25.035 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58022", 00:07:25.035 "tpoint_group_mask": "0x8", 00:07:25.035 "iscsi_conn": { 00:07:25.035 "mask": "0x2", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "scsi": { 00:07:25.035 "mask": "0x4", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "bdev": { 00:07:25.035 "mask": "0x8", 00:07:25.035 "tpoint_mask": "0xffffffffffffffff" 00:07:25.035 }, 00:07:25.035 "nvmf_rdma": { 00:07:25.035 "mask": "0x10", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "nvmf_tcp": { 00:07:25.035 "mask": "0x20", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "ftl": { 00:07:25.035 "mask": "0x40", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "blobfs": { 00:07:25.035 "mask": "0x80", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "dsa": { 00:07:25.035 "mask": "0x200", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "thread": { 00:07:25.035 "mask": "0x400", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "nvme_pcie": { 00:07:25.035 "mask": "0x800", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "iaa": { 00:07:25.035 "mask": "0x1000", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "nvme_tcp": { 00:07:25.035 "mask": "0x2000", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "bdev_nvme": { 00:07:25.035 "mask": "0x4000", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "sock": { 00:07:25.035 "mask": "0x8000", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "blob": { 00:07:25.035 "mask": "0x10000", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "bdev_raid": { 00:07:25.035 "mask": "0x20000", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 }, 00:07:25.035 "scheduler": { 00:07:25.035 "mask": "0x40000", 00:07:25.035 "tpoint_mask": "0x0" 00:07:25.035 } 00:07:25.035 }' 00:07:25.035 20:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:25.035 20:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:25.035 20:35:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:25.035 20:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:25.035 20:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:25.292 20:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:25.292 20:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:25.292 20:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:25.292 20:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:25.292 ************************************ 00:07:25.292 END TEST rpc_trace_cmd_test 00:07:25.292 ************************************ 00:07:25.293 20:35:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:25.293 00:07:25.293 real 0m0.223s 
00:07:25.293 user 0m0.172s 00:07:25.293 sys 0m0.041s 00:07:25.293 20:35:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.293 20:35:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.293 20:35:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:25.293 20:35:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:25.293 20:35:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:25.293 20:35:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.293 20:35:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.293 20:35:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.293 ************************************ 00:07:25.293 START TEST rpc_daemon_integrity 00:07:25.293 ************************************ 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.293 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:25.551 { 00:07:25.551 "name": "Malloc2", 00:07:25.551 "aliases": [ 00:07:25.551 "d8c487c3-9e73-4247-9d27-85bfdf0982aa" 00:07:25.551 ], 00:07:25.551 "product_name": "Malloc disk", 00:07:25.551 "block_size": 512, 00:07:25.551 "num_blocks": 16384, 00:07:25.551 "uuid": "d8c487c3-9e73-4247-9d27-85bfdf0982aa", 00:07:25.551 "assigned_rate_limits": { 00:07:25.551 "rw_ios_per_sec": 0, 00:07:25.551 "rw_mbytes_per_sec": 0, 00:07:25.551 "r_mbytes_per_sec": 0, 00:07:25.551 "w_mbytes_per_sec": 0 00:07:25.551 }, 00:07:25.551 "claimed": false, 00:07:25.551 "zoned": false, 00:07:25.551 "supported_io_types": { 00:07:25.551 "read": true, 00:07:25.551 "write": true, 00:07:25.551 "unmap": true, 00:07:25.551 "flush": true, 00:07:25.551 "reset": true, 00:07:25.551 "nvme_admin": false, 00:07:25.551 "nvme_io": false, 00:07:25.551 "nvme_io_md": false, 00:07:25.551 "write_zeroes": true, 00:07:25.551 "zcopy": true, 00:07:25.551 "get_zone_info": false, 00:07:25.551 "zone_management": false, 00:07:25.551 "zone_append": false, 00:07:25.551 "compare": false, 00:07:25.551 
"compare_and_write": false, 00:07:25.551 "abort": true, 00:07:25.551 "seek_hole": false, 00:07:25.551 "seek_data": false, 00:07:25.551 "copy": true, 00:07:25.551 "nvme_iov_md": false 00:07:25.551 }, 00:07:25.551 "memory_domains": [ 00:07:25.551 { 00:07:25.551 "dma_device_id": "system", 00:07:25.551 "dma_device_type": 1 00:07:25.551 }, 00:07:25.551 { 00:07:25.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.551 "dma_device_type": 2 00:07:25.551 } 00:07:25.551 ], 00:07:25.551 "driver_specific": {} 00:07:25.551 } 00:07:25.551 ]' 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.551 [2024-11-26 20:35:20.352375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:25.551 [2024-11-26 20:35:20.352469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.551 [2024-11-26 20:35:20.352502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:25.551 [2024-11-26 20:35:20.352521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.551 [2024-11-26 20:35:20.355961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.551 [2024-11-26 20:35:20.356010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:25.551 Passthru0 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.551 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:25.551 { 00:07:25.551 "name": "Malloc2", 00:07:25.551 "aliases": [ 00:07:25.551 "d8c487c3-9e73-4247-9d27-85bfdf0982aa" 00:07:25.551 ], 00:07:25.551 "product_name": "Malloc disk", 00:07:25.551 "block_size": 512, 00:07:25.551 "num_blocks": 16384, 00:07:25.551 "uuid": "d8c487c3-9e73-4247-9d27-85bfdf0982aa", 00:07:25.551 "assigned_rate_limits": { 00:07:25.551 "rw_ios_per_sec": 0, 00:07:25.551 "rw_mbytes_per_sec": 0, 00:07:25.551 "r_mbytes_per_sec": 0, 00:07:25.551 "w_mbytes_per_sec": 0 00:07:25.551 }, 00:07:25.551 "claimed": true, 00:07:25.551 "claim_type": "exclusive_write", 00:07:25.551 "zoned": false, 00:07:25.551 "supported_io_types": { 00:07:25.551 "read": true, 00:07:25.551 "write": true, 00:07:25.551 "unmap": true, 00:07:25.551 "flush": true, 00:07:25.551 "reset": true, 00:07:25.551 "nvme_admin": false, 00:07:25.551 "nvme_io": false, 00:07:25.551 "nvme_io_md": false, 00:07:25.551 "write_zeroes": true, 00:07:25.551 "zcopy": true, 00:07:25.551 "get_zone_info": false, 00:07:25.551 "zone_management": false, 00:07:25.551 "zone_append": false, 00:07:25.551 "compare": false, 00:07:25.551 "compare_and_write": false, 00:07:25.551 "abort": true, 00:07:25.551 "seek_hole": false, 00:07:25.551 "seek_data": false, 
00:07:25.551 "copy": true, 00:07:25.551 "nvme_iov_md": false 00:07:25.551 }, 00:07:25.551 "memory_domains": [ 00:07:25.551 { 00:07:25.551 "dma_device_id": "system", 00:07:25.551 "dma_device_type": 1 00:07:25.551 }, 00:07:25.551 { 00:07:25.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.552 "dma_device_type": 2 00:07:25.552 } 00:07:25.552 ], 00:07:25.552 "driver_specific": {} 00:07:25.552 }, 00:07:25.552 { 00:07:25.552 "name": "Passthru0", 00:07:25.552 "aliases": [ 00:07:25.552 "93bd82c9-3050-5699-97be-95b61debe004" 00:07:25.552 ], 00:07:25.552 "product_name": "passthru", 00:07:25.552 "block_size": 512, 00:07:25.552 "num_blocks": 16384, 00:07:25.552 "uuid": "93bd82c9-3050-5699-97be-95b61debe004", 00:07:25.552 "assigned_rate_limits": { 00:07:25.552 "rw_ios_per_sec": 0, 00:07:25.552 "rw_mbytes_per_sec": 0, 00:07:25.552 "r_mbytes_per_sec": 0, 00:07:25.552 "w_mbytes_per_sec": 0 00:07:25.552 }, 00:07:25.552 "claimed": false, 00:07:25.552 "zoned": false, 00:07:25.552 "supported_io_types": { 00:07:25.552 "read": true, 00:07:25.552 "write": true, 00:07:25.552 "unmap": true, 00:07:25.552 "flush": true, 00:07:25.552 "reset": true, 00:07:25.552 "nvme_admin": false, 00:07:25.552 "nvme_io": false, 00:07:25.552 "nvme_io_md": false, 00:07:25.552 "write_zeroes": true, 00:07:25.552 "zcopy": true, 00:07:25.552 "get_zone_info": false, 00:07:25.552 "zone_management": false, 00:07:25.552 "zone_append": false, 00:07:25.552 "compare": false, 00:07:25.552 "compare_and_write": false, 00:07:25.552 "abort": true, 00:07:25.552 "seek_hole": false, 00:07:25.552 "seek_data": false, 00:07:25.552 "copy": true, 00:07:25.552 "nvme_iov_md": false 00:07:25.552 }, 00:07:25.552 "memory_domains": [ 00:07:25.552 { 00:07:25.552 "dma_device_id": "system", 00:07:25.552 "dma_device_type": 1 00:07:25.552 }, 00:07:25.552 { 00:07:25.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.552 "dma_device_type": 2 00:07:25.552 } 00:07:25.552 ], 00:07:25.552 "driver_specific": { 00:07:25.552 "passthru": { 00:07:25.552 "name": "Passthru0", 00:07:25.552 "base_bdev_name": "Malloc2" 00:07:25.552 } 00:07:25.552 } 00:07:25.552 } 00:07:25.552 ]' 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:07:25.552 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:25.810 ************************************ 00:07:25.810 END TEST rpc_daemon_integrity 00:07:25.810 ************************************ 00:07:25.810 20:35:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:25.810 00:07:25.810 real 0m0.370s 00:07:25.810 user 0m0.202s 00:07:25.810 sys 0m0.057s 00:07:25.810 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.810 20:35:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.810 20:35:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:25.810 20:35:20 rpc -- rpc/rpc.sh@84 -- # killprocess 58022 00:07:25.810 20:35:20 rpc -- common/autotest_common.sh@954 -- # '[' -z 58022 ']' 00:07:25.810 20:35:20 rpc -- common/autotest_common.sh@958 -- # kill -0 58022 00:07:25.810 20:35:20 rpc -- common/autotest_common.sh@959 -- # uname 00:07:25.810 20:35:20 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.810 20:35:20 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58022 00:07:25.810 killing process with pid 58022 00:07:25.810 20:35:20 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.810 20:35:20 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.810 20:35:20 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58022' 00:07:25.810 20:35:20 rpc -- common/autotest_common.sh@973 -- # kill 58022 00:07:25.810 20:35:20 rpc -- common/autotest_common.sh@978 -- # wait 58022 00:07:29.095 00:07:29.095 real 0m6.614s 00:07:29.095 user 0m6.929s 00:07:29.095 sys 0m1.256s 00:07:29.095 20:35:23 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.095 ************************************ 00:07:29.095 END TEST rpc 00:07:29.095 ************************************ 00:07:29.095 20:35:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 20:35:23 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:29.095 20:35:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.095 20:35:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.095 20:35:23 -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 ************************************ 00:07:29.095 START TEST skip_rpc 00:07:29.095 ************************************ 00:07:29.095 20:35:23 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:29.095 * Looking for test storage... 
00:07:29.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:29.095 20:35:23 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.095 20:35:23 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.095 20:35:23 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.095 20:35:24 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.095 20:35:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:29.095 20:35:24 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.095 20:35:24 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.095 --rc genhtml_branch_coverage=1 00:07:29.095 --rc genhtml_function_coverage=1 00:07:29.095 --rc genhtml_legend=1 00:07:29.095 --rc geninfo_all_blocks=1 00:07:29.095 --rc geninfo_unexecuted_blocks=1 00:07:29.095 00:07:29.095 ' 00:07:29.095 20:35:24 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.095 --rc genhtml_branch_coverage=1 00:07:29.095 --rc genhtml_function_coverage=1 00:07:29.095 --rc genhtml_legend=1 00:07:29.095 --rc geninfo_all_blocks=1 00:07:29.095 --rc geninfo_unexecuted_blocks=1 00:07:29.095 00:07:29.095 ' 00:07:29.095 20:35:24 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:29.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.095 --rc genhtml_branch_coverage=1 00:07:29.095 --rc genhtml_function_coverage=1 00:07:29.095 --rc genhtml_legend=1 00:07:29.095 --rc geninfo_all_blocks=1 00:07:29.095 --rc geninfo_unexecuted_blocks=1 00:07:29.095 00:07:29.095 ' 00:07:29.095 20:35:24 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.095 --rc genhtml_branch_coverage=1 00:07:29.095 --rc genhtml_function_coverage=1 00:07:29.095 --rc genhtml_legend=1 00:07:29.095 --rc geninfo_all_blocks=1 00:07:29.095 --rc geninfo_unexecuted_blocks=1 00:07:29.095 00:07:29.095 ' 00:07:29.095 20:35:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:29.095 20:35:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:29.095 20:35:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:29.095 20:35:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.096 20:35:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.096 20:35:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 ************************************ 00:07:29.096 START TEST skip_rpc 00:07:29.096 ************************************ 00:07:29.096 20:35:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:29.096 20:35:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58267 00:07:29.096 20:35:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:29.096 20:35:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:29.096 20:35:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:29.354 [2024-11-26 20:35:24.204878] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:29.354 [2024-11-26 20:35:24.205361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58267 ] 00:07:29.612 [2024-11-26 20:35:24.398371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.612 [2024-11-26 20:35:24.589447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58267 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58267 ']' 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58267 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58267 00:07:34.912 killing process with pid 58267 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58267' 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58267 00:07:34.912 20:35:29 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58267 00:07:37.441 ************************************ 00:07:37.441 END TEST skip_rpc 00:07:37.441 ************************************ 00:07:37.441 00:07:37.441 real 0m8.045s 00:07:37.441 user 0m7.345s 00:07:37.441 sys 0m0.595s 00:07:37.441 20:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.441 20:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:07:37.441 20:35:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:37.441 20:35:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.441 20:35:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.441 20:35:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.441 ************************************ 00:07:37.441 START TEST skip_rpc_with_json 00:07:37.441 ************************************ 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:37.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58377 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58377 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58377 ']' 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.441 20:35:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:37.441 [2024-11-26 20:35:32.311694] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:37.441 [2024-11-26 20:35:32.312154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58377 ] 00:07:37.700 [2024-11-26 20:35:32.505074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.700 [2024-11-26 20:35:32.675585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:39.080 [2024-11-26 20:35:33.803337] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:39.080 request: 00:07:39.080 { 00:07:39.080 "trtype": "tcp", 00:07:39.080 "method": "nvmf_get_transports", 00:07:39.080 "req_id": 1 00:07:39.080 } 00:07:39.080 Got JSON-RPC error response 00:07:39.080 response: 00:07:39.080 { 00:07:39.080 "code": -19, 00:07:39.080 "message": "No such device" 00:07:39.080 } 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:39.080 [2024-11-26 20:35:33.815515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.080 20:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:39.080 { 00:07:39.080 "subsystems": [ 00:07:39.080 { 00:07:39.080 "subsystem": "fsdev", 00:07:39.080 "config": [ 00:07:39.080 { 00:07:39.080 "method": "fsdev_set_opts", 00:07:39.080 "params": { 00:07:39.080 "fsdev_io_pool_size": 65535, 00:07:39.080 "fsdev_io_cache_size": 256 00:07:39.080 } 00:07:39.080 } 00:07:39.080 ] 00:07:39.080 }, 00:07:39.080 { 00:07:39.080 "subsystem": "keyring", 00:07:39.080 "config": [] 00:07:39.080 }, 00:07:39.080 { 00:07:39.080 "subsystem": "iobuf", 00:07:39.080 "config": [ 00:07:39.080 { 00:07:39.080 "method": "iobuf_set_options", 00:07:39.080 "params": { 00:07:39.080 "small_pool_count": 8192, 00:07:39.080 "large_pool_count": 1024, 00:07:39.080 "small_bufsize": 8192, 00:07:39.080 "large_bufsize": 135168, 00:07:39.080 "enable_numa": false 00:07:39.080 } 00:07:39.080 } 00:07:39.080 ] 00:07:39.080 }, 00:07:39.080 { 00:07:39.080 "subsystem": "sock", 00:07:39.080 "config": [ 00:07:39.080 { 
00:07:39.080 "method": "sock_set_default_impl", 00:07:39.080 "params": { 00:07:39.080 "impl_name": "posix" 00:07:39.080 } 00:07:39.080 }, 00:07:39.080 { 00:07:39.080 "method": "sock_impl_set_options", 00:07:39.080 "params": { 00:07:39.080 "impl_name": "ssl", 00:07:39.080 "recv_buf_size": 4096, 00:07:39.080 "send_buf_size": 4096, 00:07:39.080 "enable_recv_pipe": true, 00:07:39.080 "enable_quickack": false, 00:07:39.080 "enable_placement_id": 0, 00:07:39.080 "enable_zerocopy_send_server": true, 00:07:39.080 "enable_zerocopy_send_client": false, 00:07:39.080 "zerocopy_threshold": 0, 00:07:39.080 "tls_version": 0, 00:07:39.080 "enable_ktls": false 00:07:39.080 } 00:07:39.080 }, 00:07:39.080 { 00:07:39.080 "method": "sock_impl_set_options", 00:07:39.080 "params": { 00:07:39.080 "impl_name": "posix", 00:07:39.080 "recv_buf_size": 2097152, 00:07:39.080 "send_buf_size": 2097152, 00:07:39.080 "enable_recv_pipe": true, 00:07:39.080 "enable_quickack": false, 00:07:39.080 "enable_placement_id": 0, 00:07:39.080 "enable_zerocopy_send_server": true, 00:07:39.081 "enable_zerocopy_send_client": false, 00:07:39.081 "zerocopy_threshold": 0, 00:07:39.081 "tls_version": 0, 00:07:39.081 "enable_ktls": false 00:07:39.081 } 00:07:39.081 } 00:07:39.081 ] 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "subsystem": "vmd", 00:07:39.081 "config": [] 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "subsystem": "accel", 00:07:39.081 "config": [ 00:07:39.081 { 00:07:39.081 "method": "accel_set_options", 00:07:39.081 "params": { 00:07:39.081 "small_cache_size": 128, 00:07:39.081 "large_cache_size": 16, 00:07:39.081 "task_count": 2048, 00:07:39.081 "sequence_count": 2048, 00:07:39.081 "buf_count": 2048 00:07:39.081 } 00:07:39.081 } 00:07:39.081 ] 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "subsystem": "bdev", 00:07:39.081 "config": [ 00:07:39.081 { 00:07:39.081 "method": "bdev_set_options", 00:07:39.081 "params": { 00:07:39.081 "bdev_io_pool_size": 65535, 00:07:39.081 "bdev_io_cache_size": 256, 00:07:39.081 "bdev_auto_examine": true, 00:07:39.081 "iobuf_small_cache_size": 128, 00:07:39.081 "iobuf_large_cache_size": 16 00:07:39.081 } 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "method": "bdev_raid_set_options", 00:07:39.081 "params": { 00:07:39.081 "process_window_size_kb": 1024, 00:07:39.081 "process_max_bandwidth_mb_sec": 0 00:07:39.081 } 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "method": "bdev_iscsi_set_options", 00:07:39.081 "params": { 00:07:39.081 "timeout_sec": 30 00:07:39.081 } 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "method": "bdev_nvme_set_options", 00:07:39.081 "params": { 00:07:39.081 "action_on_timeout": "none", 00:07:39.081 "timeout_us": 0, 00:07:39.081 "timeout_admin_us": 0, 00:07:39.081 "keep_alive_timeout_ms": 10000, 00:07:39.081 "arbitration_burst": 0, 00:07:39.081 "low_priority_weight": 0, 00:07:39.081 "medium_priority_weight": 0, 00:07:39.081 "high_priority_weight": 0, 00:07:39.081 "nvme_adminq_poll_period_us": 10000, 00:07:39.081 "nvme_ioq_poll_period_us": 0, 00:07:39.081 "io_queue_requests": 0, 00:07:39.081 "delay_cmd_submit": true, 00:07:39.081 "transport_retry_count": 4, 00:07:39.081 "bdev_retry_count": 3, 00:07:39.081 "transport_ack_timeout": 0, 00:07:39.081 "ctrlr_loss_timeout_sec": 0, 00:07:39.081 "reconnect_delay_sec": 0, 00:07:39.081 "fast_io_fail_timeout_sec": 0, 00:07:39.081 "disable_auto_failback": false, 00:07:39.081 "generate_uuids": false, 00:07:39.081 "transport_tos": 0, 00:07:39.081 "nvme_error_stat": false, 00:07:39.081 "rdma_srq_size": 0, 00:07:39.081 "io_path_stat": false, 
00:07:39.081 "allow_accel_sequence": false, 00:07:39.081 "rdma_max_cq_size": 0, 00:07:39.081 "rdma_cm_event_timeout_ms": 0, 00:07:39.081 "dhchap_digests": [ 00:07:39.081 "sha256", 00:07:39.081 "sha384", 00:07:39.081 "sha512" 00:07:39.081 ], 00:07:39.081 "dhchap_dhgroups": [ 00:07:39.081 "null", 00:07:39.081 "ffdhe2048", 00:07:39.081 "ffdhe3072", 00:07:39.081 "ffdhe4096", 00:07:39.081 "ffdhe6144", 00:07:39.081 "ffdhe8192" 00:07:39.081 ] 00:07:39.081 } 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "method": "bdev_nvme_set_hotplug", 00:07:39.081 "params": { 00:07:39.081 "period_us": 100000, 00:07:39.081 "enable": false 00:07:39.081 } 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "method": "bdev_wait_for_examine" 00:07:39.081 } 00:07:39.081 ] 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "subsystem": "scsi", 00:07:39.081 "config": null 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "subsystem": "scheduler", 00:07:39.081 "config": [ 00:07:39.081 { 00:07:39.081 "method": "framework_set_scheduler", 00:07:39.081 "params": { 00:07:39.081 "name": "static" 00:07:39.081 } 00:07:39.081 } 00:07:39.081 ] 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "subsystem": "vhost_scsi", 00:07:39.081 "config": [] 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "subsystem": "vhost_blk", 00:07:39.081 "config": [] 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "subsystem": "ublk", 00:07:39.081 "config": [] 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "subsystem": "nbd", 00:07:39.081 "config": [] 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "subsystem": "nvmf", 00:07:39.081 "config": [ 00:07:39.081 { 00:07:39.081 "method": "nvmf_set_config", 00:07:39.081 "params": { 00:07:39.081 "discovery_filter": "match_any", 00:07:39.081 "admin_cmd_passthru": { 00:07:39.081 "identify_ctrlr": false 00:07:39.081 }, 00:07:39.081 "dhchap_digests": [ 00:07:39.081 "sha256", 00:07:39.081 "sha384", 00:07:39.081 "sha512" 00:07:39.081 ], 00:07:39.081 "dhchap_dhgroups": [ 00:07:39.081 "null", 00:07:39.081 "ffdhe2048", 00:07:39.081 "ffdhe3072", 00:07:39.081 "ffdhe4096", 00:07:39.081 "ffdhe6144", 00:07:39.081 "ffdhe8192" 00:07:39.081 ] 00:07:39.081 } 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "method": "nvmf_set_max_subsystems", 00:07:39.081 "params": { 00:07:39.081 "max_subsystems": 1024 00:07:39.081 } 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "method": "nvmf_set_crdt", 00:07:39.081 "params": { 00:07:39.081 "crdt1": 0, 00:07:39.081 "crdt2": 0, 00:07:39.081 "crdt3": 0 00:07:39.081 } 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "method": "nvmf_create_transport", 00:07:39.081 "params": { 00:07:39.081 "trtype": "TCP", 00:07:39.081 "max_queue_depth": 128, 00:07:39.081 "max_io_qpairs_per_ctrlr": 127, 00:07:39.081 "in_capsule_data_size": 4096, 00:07:39.081 "max_io_size": 131072, 00:07:39.081 "io_unit_size": 131072, 00:07:39.081 "max_aq_depth": 128, 00:07:39.081 "num_shared_buffers": 511, 00:07:39.081 "buf_cache_size": 4294967295, 00:07:39.081 "dif_insert_or_strip": false, 00:07:39.081 "zcopy": false, 00:07:39.081 "c2h_success": true, 00:07:39.081 "sock_priority": 0, 00:07:39.081 "abort_timeout_sec": 1, 00:07:39.081 "ack_timeout": 0, 00:07:39.081 "data_wr_pool_size": 0 00:07:39.081 } 00:07:39.081 } 00:07:39.081 ] 00:07:39.081 }, 00:07:39.081 { 00:07:39.081 "subsystem": "iscsi", 00:07:39.081 "config": [ 00:07:39.081 { 00:07:39.081 "method": "iscsi_set_options", 00:07:39.081 "params": { 00:07:39.081 "node_base": "iqn.2016-06.io.spdk", 00:07:39.081 "max_sessions": 128, 00:07:39.081 "max_connections_per_session": 2, 00:07:39.081 "max_queue_depth": 64, 00:07:39.081 
"default_time2wait": 2, 00:07:39.081 "default_time2retain": 20, 00:07:39.081 "first_burst_length": 8192, 00:07:39.081 "immediate_data": true, 00:07:39.081 "allow_duplicated_isid": false, 00:07:39.081 "error_recovery_level": 0, 00:07:39.081 "nop_timeout": 60, 00:07:39.081 "nop_in_interval": 30, 00:07:39.081 "disable_chap": false, 00:07:39.081 "require_chap": false, 00:07:39.081 "mutual_chap": false, 00:07:39.081 "chap_group": 0, 00:07:39.081 "max_large_datain_per_connection": 64, 00:07:39.081 "max_r2t_per_connection": 4, 00:07:39.082 "pdu_pool_size": 36864, 00:07:39.082 "immediate_data_pool_size": 16384, 00:07:39.082 "data_out_pool_size": 2048 00:07:39.082 } 00:07:39.082 } 00:07:39.082 ] 00:07:39.082 } 00:07:39.082 ] 00:07:39.082 } 00:07:39.082 20:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:39.082 20:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58377 00:07:39.082 20:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58377 ']' 00:07:39.082 20:35:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58377 00:07:39.082 20:35:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:39.082 20:35:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.082 20:35:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58377 00:07:39.082 killing process with pid 58377 00:07:39.082 20:35:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.082 20:35:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.082 20:35:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58377' 00:07:39.082 20:35:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58377 00:07:39.082 20:35:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58377 00:07:42.365 20:35:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58444 00:07:42.365 20:35:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:42.365 20:35:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:47.637 20:35:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58444 00:07:47.637 20:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58444 ']' 00:07:47.637 20:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58444 00:07:47.637 20:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:47.637 20:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.637 20:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58444 00:07:47.637 killing process with pid 58444 00:07:47.637 20:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.637 20:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.637 20:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58444' 00:07:47.637 20:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58444 00:07:47.637 20:35:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58444 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:50.168 ************************************ 00:07:50.168 END TEST skip_rpc_with_json 00:07:50.168 ************************************ 00:07:50.168 00:07:50.168 real 0m12.581s 00:07:50.168 user 0m11.936s 00:07:50.168 sys 0m1.100s 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:50.168 20:35:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:50.168 20:35:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.168 20:35:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.168 20:35:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.168 ************************************ 00:07:50.168 START TEST skip_rpc_with_delay 00:07:50.168 ************************************ 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:50.168 20:35:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:50.168 [2024-11-26 20:35:44.944553] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
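[Editor's note] That *ERROR* line is the expected outcome: skip_rpc_with_delay asserts that spdk_tgt refuses to combine --no-rpc-server with --wait-for-rpc. Stripped of the NOT/valid_exec_arg plumbing traced above, the check reduces to a plain negative test; the binary path and flags are taken from the trace, while running it standalone like this is an assumption:

    # spdk_tgt must reject --wait-for-rpc when the RPC server is disabled
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: spdk_tgt accepted --no-rpc-server together with --wait-for-rpc" >&2
        exit 1
    fi
    echo "OK: startup failed as expected (es=1 in the trace above)"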
00:07:50.168 ************************************ 00:07:50.168 END TEST skip_rpc_with_delay 00:07:50.168 ************************************ 00:07:50.168 20:35:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:50.168 20:35:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.168 20:35:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.168 20:35:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.168 00:07:50.168 real 0m0.217s 00:07:50.168 user 0m0.095s 00:07:50.168 sys 0m0.118s 00:07:50.168 20:35:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.168 20:35:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:50.168 20:35:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:50.168 20:35:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:50.168 20:35:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:50.168 20:35:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.168 20:35:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.168 20:35:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.168 ************************************ 00:07:50.168 START TEST exit_on_failed_rpc_init 00:07:50.169 ************************************ 00:07:50.169 20:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:50.169 20:35:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58583 00:07:50.169 20:35:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58583 00:07:50.169 20:35:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:50.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.169 20:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58583 ']' 00:07:50.169 20:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.169 20:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.169 20:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.169 20:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.169 20:35:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:50.427 [2024-11-26 20:35:45.253608] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:07:50.427 [2024-11-26 20:35:45.253806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58583 ] 00:07:50.687 [2024-11-26 20:35:45.457029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.687 [2024-11-26 20:35:45.622395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:52.062 20:35:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:52.062 [2024-11-26 20:35:46.814141] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:52.062 [2024-11-26 20:35:46.814578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58601 ] 00:07:52.062 [2024-11-26 20:35:47.016221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.320 [2024-11-26 20:35:47.191100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.320 [2024-11-26 20:35:47.191465] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:52.320 [2024-11-26 20:35:47.191638] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:52.320 [2024-11-26 20:35:47.191751] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58583 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58583 ']' 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58583 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58583 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58583' 00:07:52.578 killing process with pid 58583 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58583 00:07:52.578 20:35:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58583 00:07:55.946 00:07:55.946 real 0m5.468s 00:07:55.946 user 0m6.048s 00:07:55.946 sys 0m0.743s 00:07:55.946 20:35:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.946 20:35:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:55.946 ************************************ 00:07:55.946 END TEST exit_on_failed_rpc_init 00:07:55.946 ************************************ 00:07:55.946 20:35:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:55.946 ************************************ 00:07:55.946 END TEST skip_rpc 00:07:55.946 ************************************ 00:07:55.946 00:07:55.946 real 0m26.769s 00:07:55.946 user 0m25.639s 00:07:55.946 sys 0m2.802s 00:07:55.946 20:35:50 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.946 20:35:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.946 20:35:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:55.946 20:35:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.946 20:35:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.946 20:35:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.946 
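[Editor's note] Before the suite moves on to rpc_client, note what exit_on_failed_rpc_init actually proved: two spdk_tgt instances cannot share the default /var/tmp/spdk.sock, and the second must exit non-zero (es=234 above, folded down to 1 by the trap handling). A hedged two-process sketch of the same collision — the binary path and core masks come from the trace, but the sleep-based settling in place of the harness's waitforlisten is an assumption:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &          # first instance claims /var/tmp/spdk.sock
    first=$!
    sleep 2                        # crude stand-in for the harness's waitforlisten
    if "$SPDK_TGT" -m 0x2; then    # second instance must fail: socket path in use
        echo "FAIL: second spdk_tgt started despite the socket collision" >&2
        kill "$first"; exit 1
    fi
    kill "$first" && wait "$first"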
************************************ 00:07:55.946 START TEST rpc_client 00:07:55.946 ************************************ 00:07:55.946 20:35:50 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:55.946 * Looking for test storage... 00:07:55.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:55.946 20:35:50 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:55.946 20:35:50 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:55.946 20:35:50 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:55.946 20:35:50 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.946 20:35:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:55.946 20:35:50 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.946 20:35:50 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:55.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.946 --rc genhtml_branch_coverage=1 00:07:55.946 --rc genhtml_function_coverage=1 00:07:55.946 --rc genhtml_legend=1 00:07:55.946 --rc geninfo_all_blocks=1 00:07:55.946 --rc geninfo_unexecuted_blocks=1 00:07:55.946 00:07:55.946 ' 00:07:55.946 20:35:50 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:55.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.946 --rc genhtml_branch_coverage=1 00:07:55.946 --rc genhtml_function_coverage=1 00:07:55.946 --rc genhtml_legend=1 00:07:55.946 --rc geninfo_all_blocks=1 00:07:55.946 --rc geninfo_unexecuted_blocks=1 00:07:55.946 00:07:55.946 ' 00:07:55.946 20:35:50 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:55.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.946 --rc genhtml_branch_coverage=1 00:07:55.946 --rc genhtml_function_coverage=1 00:07:55.946 --rc genhtml_legend=1 00:07:55.946 --rc geninfo_all_blocks=1 00:07:55.946 --rc geninfo_unexecuted_blocks=1 00:07:55.946 00:07:55.946 ' 00:07:55.946 20:35:50 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:55.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.946 --rc genhtml_branch_coverage=1 00:07:55.947 --rc genhtml_function_coverage=1 00:07:55.947 --rc genhtml_legend=1 00:07:55.947 --rc geninfo_all_blocks=1 00:07:55.947 --rc geninfo_unexecuted_blocks=1 00:07:55.947 00:07:55.947 ' 00:07:55.947 20:35:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:55.947 OK 00:07:55.947 20:35:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:55.947 00:07:55.947 real 0m0.271s 00:07:55.947 user 0m0.149s 00:07:55.947 sys 0m0.134s 00:07:55.947 20:35:50 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.947 20:35:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:55.947 ************************************ 00:07:55.947 END TEST rpc_client 00:07:55.947 ************************************ 00:07:56.206 20:35:50 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:56.206 20:35:50 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.206 20:35:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.206 20:35:50 -- common/autotest_common.sh@10 -- # set +x 00:07:56.206 ************************************ 00:07:56.206 START TEST json_config 00:07:56.206 ************************************ 00:07:56.206 20:35:50 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:56.206 20:35:51 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.206 20:35:51 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.206 20:35:51 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.206 20:35:51 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.206 20:35:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.206 20:35:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.206 20:35:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.206 20:35:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.206 20:35:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.206 20:35:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.206 20:35:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.206 20:35:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.206 20:35:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.206 20:35:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.206 20:35:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.206 20:35:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:56.206 20:35:51 json_config -- scripts/common.sh@345 -- # : 1 00:07:56.206 20:35:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.206 20:35:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.206 20:35:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:56.206 20:35:51 json_config -- scripts/common.sh@353 -- # local d=1 00:07:56.206 20:35:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.206 20:35:51 json_config -- scripts/common.sh@355 -- # echo 1 00:07:56.206 20:35:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.206 20:35:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:56.206 20:35:51 json_config -- scripts/common.sh@353 -- # local d=2 00:07:56.206 20:35:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.206 20:35:51 json_config -- scripts/common.sh@355 -- # echo 2 00:07:56.206 20:35:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.206 20:35:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.206 20:35:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.206 20:35:51 json_config -- scripts/common.sh@368 -- # return 0 00:07:56.206 20:35:51 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.206 20:35:51 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.206 --rc genhtml_branch_coverage=1 00:07:56.206 --rc genhtml_function_coverage=1 00:07:56.206 --rc genhtml_legend=1 00:07:56.206 --rc geninfo_all_blocks=1 00:07:56.206 --rc geninfo_unexecuted_blocks=1 00:07:56.206 00:07:56.206 ' 00:07:56.206 20:35:51 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.206 --rc genhtml_branch_coverage=1 00:07:56.206 --rc genhtml_function_coverage=1 00:07:56.206 --rc genhtml_legend=1 00:07:56.206 --rc geninfo_all_blocks=1 00:07:56.206 --rc geninfo_unexecuted_blocks=1 00:07:56.206 00:07:56.206 ' 00:07:56.206 20:35:51 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.206 --rc genhtml_branch_coverage=1 00:07:56.206 --rc genhtml_function_coverage=1 00:07:56.206 --rc genhtml_legend=1 00:07:56.206 --rc geninfo_all_blocks=1 00:07:56.206 --rc geninfo_unexecuted_blocks=1 00:07:56.206 00:07:56.206 ' 00:07:56.206 20:35:51 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.206 --rc genhtml_branch_coverage=1 00:07:56.206 --rc genhtml_function_coverage=1 00:07:56.206 --rc genhtml_legend=1 00:07:56.206 --rc geninfo_all_blocks=1 00:07:56.206 --rc geninfo_unexecuted_blocks=1 00:07:56.206 00:07:56.206 ' 00:07:56.206 20:35:51 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.206 20:35:51 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b667ac01-5336-4eb3-bd57-57a0d0e36562 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b667ac01-5336-4eb3-bd57-57a0d0e36562 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.206 20:35:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.206 20:35:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.206 20:35:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.206 20:35:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.206 20:35:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.206 20:35:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.206 20:35:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.206 20:35:51 json_config -- paths/export.sh@5 -- # export PATH 00:07:56.206 20:35:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@51 -- # : 0 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.206 20:35:51 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.206 20:35:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.207 20:35:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.207 20:35:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.207 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.207 20:35:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.207 20:35:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.207 20:35:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.207 20:35:51 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:56.207 20:35:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:56.207 20:35:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:56.207 20:35:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:56.207 WARNING: No tests are enabled so not running JSON configuration tests 00:07:56.207 20:35:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:56.207 20:35:51 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:56.207 20:35:51 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:56.207 00:07:56.207 real 0m0.217s 00:07:56.207 user 0m0.140s 00:07:56.207 sys 0m0.076s 00:07:56.207 20:35:51 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.207 ************************************ 00:07:56.207 END TEST json_config 00:07:56.207 20:35:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:56.207 ************************************ 00:07:56.466 20:35:51 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:56.466 20:35:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.466 20:35:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.466 20:35:51 -- common/autotest_common.sh@10 -- # set +x 00:07:56.466 ************************************ 00:07:56.466 START TEST json_config_extra_key 00:07:56.466 ************************************ 00:07:56.466 20:35:51 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:56.466 20:35:51 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.466 20:35:51 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.466 20:35:51 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.466 20:35:51 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.466 20:35:51 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.466 20:35:51 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:56.466 20:35:51 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.466 20:35:51 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.466 --rc genhtml_branch_coverage=1 00:07:56.466 --rc genhtml_function_coverage=1 00:07:56.466 --rc genhtml_legend=1 00:07:56.466 --rc geninfo_all_blocks=1 00:07:56.466 --rc geninfo_unexecuted_blocks=1 00:07:56.466 00:07:56.466 ' 00:07:56.466 20:35:51 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.466 --rc genhtml_branch_coverage=1 00:07:56.466 --rc genhtml_function_coverage=1 00:07:56.466 --rc genhtml_legend=1 00:07:56.466 --rc geninfo_all_blocks=1 00:07:56.466 --rc geninfo_unexecuted_blocks=1 00:07:56.466 00:07:56.466 ' 00:07:56.466 20:35:51 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.466 --rc genhtml_branch_coverage=1 00:07:56.466 --rc genhtml_function_coverage=1 00:07:56.466 --rc genhtml_legend=1 00:07:56.466 --rc geninfo_all_blocks=1 00:07:56.466 --rc geninfo_unexecuted_blocks=1 00:07:56.466 00:07:56.466 ' 00:07:56.466 20:35:51 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.466 --rc genhtml_branch_coverage=1 00:07:56.466 --rc 
genhtml_function_coverage=1 00:07:56.466 --rc genhtml_legend=1 00:07:56.466 --rc geninfo_all_blocks=1 00:07:56.466 --rc geninfo_unexecuted_blocks=1 00:07:56.466 00:07:56.466 ' 00:07:56.466 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.466 20:35:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:56.466 20:35:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.466 20:35:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.466 20:35:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.466 20:35:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.466 20:35:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.466 20:35:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.466 20:35:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.466 20:35:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.466 20:35:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b667ac01-5336-4eb3-bd57-57a0d0e36562 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b667ac01-5336-4eb3-bd57-57a0d0e36562 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.467 20:35:51 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.467 20:35:51 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.467 20:35:51 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.467 20:35:51 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.467 20:35:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.467 20:35:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.467 20:35:51 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.467 20:35:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:56.467 20:35:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.467 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.467 20:35:51 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:56.467 INFO: launching applications... 
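The "[: : integer expression expected" message from test/nvmf/common.sh line 33, seen twice above, is harmless for this run but has a simple cause: the traced test is '[' '' -eq 1 ']', and -eq needs integer operands, so an unset or empty variable trips the error. A minimal sketch of the failing shape and the usual guard, with a hypothetical variable name since the trace only shows the already-expanded (empty) value:

    # Prints "[: : integer expression expected" when SOME_FLAG is empty/unset:
    [ "$SOME_FLAG" -eq 1 ] && echo enabled
    # Guarded form: default the expansion so [ always sees an integer.
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled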
00:07:56.467 20:35:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:56.467 20:35:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:56.467 20:35:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:56.467 20:35:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:56.467 20:35:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:56.467 20:35:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:56.467 Waiting for target to run... 00:07:56.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:56.467 20:35:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:56.467 20:35:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:56.467 20:35:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58822 00:07:56.467 20:35:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:56.467 20:35:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58822 /var/tmp/spdk_tgt.sock 00:07:56.467 20:35:51 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58822 ']' 00:07:56.467 20:35:51 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:56.467 20:35:51 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:56.467 20:35:51 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.467 20:35:51 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:56.467 20:35:51 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.467 20:35:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:56.726 [2024-11-26 20:35:51.583437] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:07:56.726 [2024-11-26 20:35:51.583644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58822 ] 00:07:57.293 [2024-11-26 20:35:52.186835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.551 [2024-11-26 20:35:52.332040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.485 00:07:58.485 INFO: shutting down applications... 00:07:58.485 20:35:53 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.485 20:35:53 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:58.485 20:35:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:58.485 20:35:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
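json_config_extra_key's setup above boils down to: start spdk_tgt with a pre-baked JSON config, wait until it listens on /var/tmp/spdk_tgt.sock, then (in the lines that follow) send SIGINT and poll kill -0 every 0.5 s for up to 30 tries. A condensed sketch of that lifecycle, with simplified names and without the real helpers' error handling (json_config_test_start_app and waitforlisten live in test/json_config/common.sh and autotest_common.sh):

    # Start the target with the extra_key JSON config, flags as traced above.
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    pid=$!
    # Wait for the RPC unix socket to appear (no timeout here, for brevity).
    until [[ -S /var/tmp/spdk_tgt.sock ]]; do sleep 0.1; done

    # Graceful stop, matching the poll loop traced below.
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 probes, sends nothing
        sleep 0.5
    done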
00:07:58.485 20:35:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:58.485 20:35:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:58.485 20:35:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:58.485 20:35:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58822 ]] 00:07:58.485 20:35:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58822 00:07:58.485 20:35:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:58.485 20:35:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:58.485 20:35:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58822 00:07:58.485 20:35:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:59.053 20:35:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:59.053 20:35:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:59.053 20:35:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58822 00:07:59.053 20:35:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:59.312 20:35:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:59.312 20:35:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:59.312 20:35:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58822 00:07:59.312 20:35:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:59.880 20:35:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:59.880 20:35:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:59.880 20:35:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58822 00:07:59.880 20:35:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:00.449 20:35:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:00.449 20:35:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:00.449 20:35:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58822 00:08:00.449 20:35:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:01.018 20:35:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:01.018 20:35:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:01.018 20:35:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58822 00:08:01.018 20:35:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:01.584 20:35:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:01.584 20:35:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:01.584 20:35:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58822 00:08:01.584 20:35:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:01.842 SPDK target shutdown done 00:08:01.842 20:35:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:01.842 20:35:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:01.842 20:35:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58822 00:08:01.842 20:35:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:01.842 20:35:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:01.842 20:35:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n 
'' ]] 00:08:01.842 20:35:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:01.842 Success 00:08:01.842 20:35:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:01.842 ************************************ 00:08:01.842 END TEST json_config_extra_key 00:08:01.842 ************************************ 00:08:01.842 00:08:01.842 real 0m5.583s 00:08:01.842 user 0m4.736s 00:08:01.842 sys 0m0.896s 00:08:01.842 20:35:56 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.842 20:35:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 20:35:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:02.101 20:35:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.101 20:35:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.101 20:35:56 -- common/autotest_common.sh@10 -- # set +x 00:08:02.101 ************************************ 00:08:02.101 START TEST alias_rpc 00:08:02.101 ************************************ 00:08:02.101 20:35:56 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:02.101 * Looking for test storage... 00:08:02.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:02.101 20:35:56 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:02.101 20:35:56 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:02.101 20:35:56 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.101 20:35:57 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:02.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.101 --rc genhtml_branch_coverage=1 00:08:02.101 --rc genhtml_function_coverage=1 00:08:02.101 --rc genhtml_legend=1 00:08:02.101 --rc geninfo_all_blocks=1 00:08:02.101 --rc geninfo_unexecuted_blocks=1 00:08:02.101 00:08:02.101 ' 00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:02.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.101 --rc genhtml_branch_coverage=1 00:08:02.101 --rc genhtml_function_coverage=1 00:08:02.101 --rc genhtml_legend=1 00:08:02.101 --rc geninfo_all_blocks=1 00:08:02.101 --rc geninfo_unexecuted_blocks=1 00:08:02.101 00:08:02.101 ' 00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:02.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.101 --rc genhtml_branch_coverage=1 00:08:02.101 --rc genhtml_function_coverage=1 00:08:02.101 --rc genhtml_legend=1 00:08:02.101 --rc geninfo_all_blocks=1 00:08:02.101 --rc geninfo_unexecuted_blocks=1 00:08:02.101 00:08:02.101 ' 00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:02.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.101 --rc genhtml_branch_coverage=1 00:08:02.101 --rc genhtml_function_coverage=1 00:08:02.101 --rc genhtml_legend=1 00:08:02.101 --rc geninfo_all_blocks=1 00:08:02.101 --rc geninfo_unexecuted_blocks=1 00:08:02.101 00:08:02.101 ' 00:08:02.101 20:35:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:02.101 20:35:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58943 00:08:02.101 20:35:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58943 00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58943 ']' 00:08:02.101 20:35:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
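The lt 1.15 2 / cmp_versions trace that reappears at the top of every test is the lcov version gate: split both version strings on '.', '-' and ':', then compare numerically field by field to decide which coverage flags lcov accepts. A simplified standalone equivalent of the less-than path (the real cmp_versions in scripts/common.sh supports more operators):

    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field wins
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"

The check runs again for each test binary because every *.sh re-sources the common helpers, which is why the identical trace shows up four times in this section.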
00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.101 20:35:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.359 [2024-11-26 20:35:57.178264] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:02.359 [2024-11-26 20:35:57.178410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58943 ] 00:08:02.617 [2024-11-26 20:35:57.356755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.617 [2024-11-26 20:35:57.487715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.551 20:35:58 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.551 20:35:58 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:03.551 20:35:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:03.810 20:35:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58943 00:08:03.810 20:35:58 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58943 ']' 00:08:03.810 20:35:58 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58943 00:08:03.810 20:35:58 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:03.810 20:35:58 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.810 20:35:58 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58943 00:08:03.810 20:35:58 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.810 20:35:58 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.810 killing process with pid 58943 00:08:03.810 20:35:58 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58943' 00:08:03.810 20:35:58 alias_rpc -- common/autotest_common.sh@973 -- # kill 58943 00:08:03.810 20:35:58 alias_rpc -- common/autotest_common.sh@978 -- # wait 58943 00:08:07.139 00:08:07.139 real 0m4.604s 00:08:07.139 user 0m4.704s 00:08:07.139 sys 0m0.671s 00:08:07.139 20:36:01 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.139 20:36:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.139 ************************************ 00:08:07.139 END TEST alias_rpc 00:08:07.139 ************************************ 00:08:07.139 20:36:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:07.139 20:36:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:07.139 20:36:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.139 20:36:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.139 20:36:01 -- common/autotest_common.sh@10 -- # set +x 00:08:07.139 ************************************ 00:08:07.139 START TEST spdkcli_tcp 00:08:07.139 ************************************ 00:08:07.139 20:36:01 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:07.139 * Looking for test storage... 
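alias_rpc finishes above with the killprocess helper, whose traced shape is: probe the pid with kill -0, read the process name with ps --no-headers -o comm=, refuse to signal anything named sudo, then kill and wait. A condensed sketch of that logic (not the exact autotest_common.sh implementation):

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1          # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                      # reap; ignore its exit code
    }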
00:08:07.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:07.139 20:36:01 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:07.139 20:36:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:07.139 20:36:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:07.139 20:36:01 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.139 20:36:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:07.139 20:36:01 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.139 20:36:01 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:07.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.139 --rc genhtml_branch_coverage=1 00:08:07.139 --rc genhtml_function_coverage=1 00:08:07.139 --rc genhtml_legend=1 00:08:07.139 --rc geninfo_all_blocks=1 00:08:07.139 --rc geninfo_unexecuted_blocks=1 00:08:07.139 00:08:07.139 ' 00:08:07.139 20:36:01 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:07.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.139 --rc genhtml_branch_coverage=1 00:08:07.139 --rc genhtml_function_coverage=1 00:08:07.139 --rc genhtml_legend=1 00:08:07.139 --rc geninfo_all_blocks=1 00:08:07.139 --rc geninfo_unexecuted_blocks=1 00:08:07.139 
00:08:07.139 ' 00:08:07.139 20:36:01 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:07.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.139 --rc genhtml_branch_coverage=1 00:08:07.139 --rc genhtml_function_coverage=1 00:08:07.139 --rc genhtml_legend=1 00:08:07.139 --rc geninfo_all_blocks=1 00:08:07.139 --rc geninfo_unexecuted_blocks=1 00:08:07.139 00:08:07.139 ' 00:08:07.139 20:36:01 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:07.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.139 --rc genhtml_branch_coverage=1 00:08:07.139 --rc genhtml_function_coverage=1 00:08:07.139 --rc genhtml_legend=1 00:08:07.139 --rc geninfo_all_blocks=1 00:08:07.139 --rc geninfo_unexecuted_blocks=1 00:08:07.139 00:08:07.139 ' 00:08:07.139 20:36:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:07.139 20:36:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:07.139 20:36:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:07.139 20:36:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:07.139 20:36:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:07.140 20:36:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:07.140 20:36:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:07.140 20:36:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.140 20:36:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:07.140 20:36:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59061 00:08:07.140 20:36:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59061 00:08:07.140 20:36:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:07.140 20:36:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59061 ']' 00:08:07.140 20:36:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.140 20:36:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.140 20:36:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.140 20:36:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.140 20:36:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:07.140 [2024-11-26 20:36:01.908982] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
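Note the core mask change for this test: the earlier targets ran with -m 0x1 (one reactor), while spdkcli_tcp starts spdk_tgt with -m 0x3, binary 11, so two reactors come up, matching the two "Reactor started on core 0/1" notices just below. The launch line as traced, with -p 0 picking core 0 as the main core:

    # 0x3 selects cores 0 and 1; one reactor thread per selected core.
    build/bin/spdk_tgt -m 0x3 -p 0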
00:08:07.140 [2024-11-26 20:36:01.909396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59061 ] 00:08:07.140 [2024-11-26 20:36:02.111167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:07.421 [2024-11-26 20:36:02.251330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.421 [2024-11-26 20:36:02.251337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.359 20:36:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.359 20:36:03 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:08.359 20:36:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59080 00:08:08.359 20:36:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:08.359 20:36:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:08.620 [ 00:08:08.620 "bdev_malloc_delete", 00:08:08.620 "bdev_malloc_create", 00:08:08.620 "bdev_null_resize", 00:08:08.620 "bdev_null_delete", 00:08:08.620 "bdev_null_create", 00:08:08.620 "bdev_nvme_cuse_unregister", 00:08:08.620 "bdev_nvme_cuse_register", 00:08:08.620 "bdev_opal_new_user", 00:08:08.620 "bdev_opal_set_lock_state", 00:08:08.620 "bdev_opal_delete", 00:08:08.620 "bdev_opal_get_info", 00:08:08.620 "bdev_opal_create", 00:08:08.620 "bdev_nvme_opal_revert", 00:08:08.620 "bdev_nvme_opal_init", 00:08:08.620 "bdev_nvme_send_cmd", 00:08:08.620 "bdev_nvme_set_keys", 00:08:08.620 "bdev_nvme_get_path_iostat", 00:08:08.620 "bdev_nvme_get_mdns_discovery_info", 00:08:08.620 "bdev_nvme_stop_mdns_discovery", 00:08:08.620 "bdev_nvme_start_mdns_discovery", 00:08:08.620 "bdev_nvme_set_multipath_policy", 00:08:08.620 "bdev_nvme_set_preferred_path", 00:08:08.620 "bdev_nvme_get_io_paths", 00:08:08.620 "bdev_nvme_remove_error_injection", 00:08:08.620 "bdev_nvme_add_error_injection", 00:08:08.620 "bdev_nvme_get_discovery_info", 00:08:08.620 "bdev_nvme_stop_discovery", 00:08:08.620 "bdev_nvme_start_discovery", 00:08:08.620 "bdev_nvme_get_controller_health_info", 00:08:08.620 "bdev_nvme_disable_controller", 00:08:08.620 "bdev_nvme_enable_controller", 00:08:08.620 "bdev_nvme_reset_controller", 00:08:08.620 "bdev_nvme_get_transport_statistics", 00:08:08.620 "bdev_nvme_apply_firmware", 00:08:08.620 "bdev_nvme_detach_controller", 00:08:08.620 "bdev_nvme_get_controllers", 00:08:08.620 "bdev_nvme_attach_controller", 00:08:08.620 "bdev_nvme_set_hotplug", 00:08:08.620 "bdev_nvme_set_options", 00:08:08.620 "bdev_passthru_delete", 00:08:08.620 "bdev_passthru_create", 00:08:08.620 "bdev_lvol_set_parent_bdev", 00:08:08.620 "bdev_lvol_set_parent", 00:08:08.620 "bdev_lvol_check_shallow_copy", 00:08:08.620 "bdev_lvol_start_shallow_copy", 00:08:08.620 "bdev_lvol_grow_lvstore", 00:08:08.620 "bdev_lvol_get_lvols", 00:08:08.620 "bdev_lvol_get_lvstores", 00:08:08.620 "bdev_lvol_delete", 00:08:08.620 "bdev_lvol_set_read_only", 00:08:08.620 "bdev_lvol_resize", 00:08:08.620 "bdev_lvol_decouple_parent", 00:08:08.620 "bdev_lvol_inflate", 00:08:08.620 "bdev_lvol_rename", 00:08:08.620 "bdev_lvol_clone_bdev", 00:08:08.620 "bdev_lvol_clone", 00:08:08.620 "bdev_lvol_snapshot", 00:08:08.620 "bdev_lvol_create", 00:08:08.620 "bdev_lvol_delete_lvstore", 00:08:08.620 "bdev_lvol_rename_lvstore", 00:08:08.620 
"bdev_lvol_create_lvstore", 00:08:08.620 "bdev_raid_set_options", 00:08:08.620 "bdev_raid_remove_base_bdev", 00:08:08.620 "bdev_raid_add_base_bdev", 00:08:08.620 "bdev_raid_delete", 00:08:08.620 "bdev_raid_create", 00:08:08.620 "bdev_raid_get_bdevs", 00:08:08.620 "bdev_error_inject_error", 00:08:08.620 "bdev_error_delete", 00:08:08.620 "bdev_error_create", 00:08:08.620 "bdev_split_delete", 00:08:08.620 "bdev_split_create", 00:08:08.620 "bdev_delay_delete", 00:08:08.620 "bdev_delay_create", 00:08:08.620 "bdev_delay_update_latency", 00:08:08.620 "bdev_zone_block_delete", 00:08:08.620 "bdev_zone_block_create", 00:08:08.620 "blobfs_create", 00:08:08.620 "blobfs_detect", 00:08:08.620 "blobfs_set_cache_size", 00:08:08.620 "bdev_xnvme_delete", 00:08:08.620 "bdev_xnvme_create", 00:08:08.620 "bdev_aio_delete", 00:08:08.620 "bdev_aio_rescan", 00:08:08.620 "bdev_aio_create", 00:08:08.620 "bdev_ftl_set_property", 00:08:08.620 "bdev_ftl_get_properties", 00:08:08.620 "bdev_ftl_get_stats", 00:08:08.620 "bdev_ftl_unmap", 00:08:08.620 "bdev_ftl_unload", 00:08:08.620 "bdev_ftl_delete", 00:08:08.620 "bdev_ftl_load", 00:08:08.620 "bdev_ftl_create", 00:08:08.620 "bdev_virtio_attach_controller", 00:08:08.620 "bdev_virtio_scsi_get_devices", 00:08:08.620 "bdev_virtio_detach_controller", 00:08:08.620 "bdev_virtio_blk_set_hotplug", 00:08:08.620 "bdev_iscsi_delete", 00:08:08.620 "bdev_iscsi_create", 00:08:08.620 "bdev_iscsi_set_options", 00:08:08.620 "accel_error_inject_error", 00:08:08.620 "ioat_scan_accel_module", 00:08:08.620 "dsa_scan_accel_module", 00:08:08.620 "iaa_scan_accel_module", 00:08:08.620 "keyring_file_remove_key", 00:08:08.620 "keyring_file_add_key", 00:08:08.620 "keyring_linux_set_options", 00:08:08.620 "fsdev_aio_delete", 00:08:08.620 "fsdev_aio_create", 00:08:08.620 "iscsi_get_histogram", 00:08:08.620 "iscsi_enable_histogram", 00:08:08.620 "iscsi_set_options", 00:08:08.620 "iscsi_get_auth_groups", 00:08:08.620 "iscsi_auth_group_remove_secret", 00:08:08.620 "iscsi_auth_group_add_secret", 00:08:08.620 "iscsi_delete_auth_group", 00:08:08.620 "iscsi_create_auth_group", 00:08:08.620 "iscsi_set_discovery_auth", 00:08:08.620 "iscsi_get_options", 00:08:08.620 "iscsi_target_node_request_logout", 00:08:08.620 "iscsi_target_node_set_redirect", 00:08:08.620 "iscsi_target_node_set_auth", 00:08:08.620 "iscsi_target_node_add_lun", 00:08:08.620 "iscsi_get_stats", 00:08:08.620 "iscsi_get_connections", 00:08:08.620 "iscsi_portal_group_set_auth", 00:08:08.620 "iscsi_start_portal_group", 00:08:08.620 "iscsi_delete_portal_group", 00:08:08.620 "iscsi_create_portal_group", 00:08:08.620 "iscsi_get_portal_groups", 00:08:08.620 "iscsi_delete_target_node", 00:08:08.621 "iscsi_target_node_remove_pg_ig_maps", 00:08:08.621 "iscsi_target_node_add_pg_ig_maps", 00:08:08.621 "iscsi_create_target_node", 00:08:08.621 "iscsi_get_target_nodes", 00:08:08.621 "iscsi_delete_initiator_group", 00:08:08.621 "iscsi_initiator_group_remove_initiators", 00:08:08.621 "iscsi_initiator_group_add_initiators", 00:08:08.621 "iscsi_create_initiator_group", 00:08:08.621 "iscsi_get_initiator_groups", 00:08:08.621 "nvmf_set_crdt", 00:08:08.621 "nvmf_set_config", 00:08:08.621 "nvmf_set_max_subsystems", 00:08:08.621 "nvmf_stop_mdns_prr", 00:08:08.621 "nvmf_publish_mdns_prr", 00:08:08.621 "nvmf_subsystem_get_listeners", 00:08:08.621 "nvmf_subsystem_get_qpairs", 00:08:08.621 "nvmf_subsystem_get_controllers", 00:08:08.621 "nvmf_get_stats", 00:08:08.621 "nvmf_get_transports", 00:08:08.621 "nvmf_create_transport", 00:08:08.621 "nvmf_get_targets", 00:08:08.621 
"nvmf_delete_target", 00:08:08.621 "nvmf_create_target", 00:08:08.621 "nvmf_subsystem_allow_any_host", 00:08:08.621 "nvmf_subsystem_set_keys", 00:08:08.621 "nvmf_subsystem_remove_host", 00:08:08.621 "nvmf_subsystem_add_host", 00:08:08.621 "nvmf_ns_remove_host", 00:08:08.621 "nvmf_ns_add_host", 00:08:08.621 "nvmf_subsystem_remove_ns", 00:08:08.621 "nvmf_subsystem_set_ns_ana_group", 00:08:08.621 "nvmf_subsystem_add_ns", 00:08:08.621 "nvmf_subsystem_listener_set_ana_state", 00:08:08.621 "nvmf_discovery_get_referrals", 00:08:08.621 "nvmf_discovery_remove_referral", 00:08:08.621 "nvmf_discovery_add_referral", 00:08:08.621 "nvmf_subsystem_remove_listener", 00:08:08.621 "nvmf_subsystem_add_listener", 00:08:08.621 "nvmf_delete_subsystem", 00:08:08.621 "nvmf_create_subsystem", 00:08:08.621 "nvmf_get_subsystems", 00:08:08.621 "env_dpdk_get_mem_stats", 00:08:08.621 "nbd_get_disks", 00:08:08.621 "nbd_stop_disk", 00:08:08.621 "nbd_start_disk", 00:08:08.621 "ublk_recover_disk", 00:08:08.621 "ublk_get_disks", 00:08:08.621 "ublk_stop_disk", 00:08:08.621 "ublk_start_disk", 00:08:08.621 "ublk_destroy_target", 00:08:08.621 "ublk_create_target", 00:08:08.621 "virtio_blk_create_transport", 00:08:08.621 "virtio_blk_get_transports", 00:08:08.621 "vhost_controller_set_coalescing", 00:08:08.621 "vhost_get_controllers", 00:08:08.621 "vhost_delete_controller", 00:08:08.621 "vhost_create_blk_controller", 00:08:08.621 "vhost_scsi_controller_remove_target", 00:08:08.621 "vhost_scsi_controller_add_target", 00:08:08.621 "vhost_start_scsi_controller", 00:08:08.621 "vhost_create_scsi_controller", 00:08:08.621 "thread_set_cpumask", 00:08:08.621 "scheduler_set_options", 00:08:08.621 "framework_get_governor", 00:08:08.621 "framework_get_scheduler", 00:08:08.621 "framework_set_scheduler", 00:08:08.621 "framework_get_reactors", 00:08:08.621 "thread_get_io_channels", 00:08:08.621 "thread_get_pollers", 00:08:08.621 "thread_get_stats", 00:08:08.621 "framework_monitor_context_switch", 00:08:08.621 "spdk_kill_instance", 00:08:08.621 "log_enable_timestamps", 00:08:08.621 "log_get_flags", 00:08:08.621 "log_clear_flag", 00:08:08.621 "log_set_flag", 00:08:08.621 "log_get_level", 00:08:08.621 "log_set_level", 00:08:08.621 "log_get_print_level", 00:08:08.621 "log_set_print_level", 00:08:08.621 "framework_enable_cpumask_locks", 00:08:08.621 "framework_disable_cpumask_locks", 00:08:08.621 "framework_wait_init", 00:08:08.621 "framework_start_init", 00:08:08.621 "scsi_get_devices", 00:08:08.621 "bdev_get_histogram", 00:08:08.621 "bdev_enable_histogram", 00:08:08.621 "bdev_set_qos_limit", 00:08:08.621 "bdev_set_qd_sampling_period", 00:08:08.621 "bdev_get_bdevs", 00:08:08.621 "bdev_reset_iostat", 00:08:08.621 "bdev_get_iostat", 00:08:08.621 "bdev_examine", 00:08:08.621 "bdev_wait_for_examine", 00:08:08.621 "bdev_set_options", 00:08:08.621 "accel_get_stats", 00:08:08.621 "accel_set_options", 00:08:08.621 "accel_set_driver", 00:08:08.621 "accel_crypto_key_destroy", 00:08:08.621 "accel_crypto_keys_get", 00:08:08.621 "accel_crypto_key_create", 00:08:08.621 "accel_assign_opc", 00:08:08.621 "accel_get_module_info", 00:08:08.621 "accel_get_opc_assignments", 00:08:08.621 "vmd_rescan", 00:08:08.621 "vmd_remove_device", 00:08:08.621 "vmd_enable", 00:08:08.621 "sock_get_default_impl", 00:08:08.621 "sock_set_default_impl", 00:08:08.621 "sock_impl_set_options", 00:08:08.621 "sock_impl_get_options", 00:08:08.621 "iobuf_get_stats", 00:08:08.621 "iobuf_set_options", 00:08:08.621 "keyring_get_keys", 00:08:08.621 "framework_get_pci_devices", 00:08:08.621 
"framework_get_config", 00:08:08.621 "framework_get_subsystems", 00:08:08.621 "fsdev_set_opts", 00:08:08.621 "fsdev_get_opts", 00:08:08.621 "trace_get_info", 00:08:08.621 "trace_get_tpoint_group_mask", 00:08:08.621 "trace_disable_tpoint_group", 00:08:08.621 "trace_enable_tpoint_group", 00:08:08.621 "trace_clear_tpoint_mask", 00:08:08.621 "trace_set_tpoint_mask", 00:08:08.621 "notify_get_notifications", 00:08:08.621 "notify_get_types", 00:08:08.621 "spdk_get_version", 00:08:08.621 "rpc_get_methods" 00:08:08.621 ] 00:08:08.621 20:36:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:08.621 20:36:03 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:08.621 20:36:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.621 20:36:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:08.621 20:36:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59061 00:08:08.621 20:36:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59061 ']' 00:08:08.621 20:36:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59061 00:08:08.621 20:36:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:08.621 20:36:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.621 20:36:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59061 00:08:08.881 killing process with pid 59061 00:08:08.881 20:36:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.881 20:36:03 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.881 20:36:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59061' 00:08:08.881 20:36:03 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59061 00:08:08.881 20:36:03 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59061 00:08:11.442 ************************************ 00:08:11.442 END TEST spdkcli_tcp 00:08:11.442 ************************************ 00:08:11.442 00:08:11.442 real 0m4.770s 00:08:11.442 user 0m8.656s 00:08:11.442 sys 0m0.727s 00:08:11.442 20:36:06 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.442 20:36:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.442 20:36:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:11.442 20:36:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.442 20:36:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.442 20:36:06 -- common/autotest_common.sh@10 -- # set +x 00:08:11.442 ************************************ 00:08:11.442 START TEST dpdk_mem_utility 00:08:11.442 ************************************ 00:08:11.442 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:11.701 * Looking for test storage... 
00:08:11.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:11.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.701 20:36:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.701 --rc genhtml_branch_coverage=1 00:08:11.701 --rc genhtml_function_coverage=1 00:08:11.701 --rc genhtml_legend=1 00:08:11.701 --rc geninfo_all_blocks=1 00:08:11.701 --rc geninfo_unexecuted_blocks=1 00:08:11.701 00:08:11.701 ' 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.701 --rc genhtml_branch_coverage=1 00:08:11.701 --rc genhtml_function_coverage=1 00:08:11.701 --rc genhtml_legend=1 00:08:11.701 --rc geninfo_all_blocks=1 00:08:11.701 --rc geninfo_unexecuted_blocks=1 00:08:11.701 00:08:11.701 ' 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.701 --rc genhtml_branch_coverage=1 00:08:11.701 --rc genhtml_function_coverage=1 00:08:11.701 --rc genhtml_legend=1 00:08:11.701 --rc geninfo_all_blocks=1 00:08:11.701 --rc geninfo_unexecuted_blocks=1 00:08:11.701 00:08:11.701 ' 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.701 --rc genhtml_branch_coverage=1 00:08:11.701 --rc genhtml_function_coverage=1 00:08:11.701 --rc genhtml_legend=1 00:08:11.701 --rc geninfo_all_blocks=1 00:08:11.701 --rc geninfo_unexecuted_blocks=1 00:08:11.701 00:08:11.701 ' 00:08:11.701 20:36:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:11.701 20:36:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59191 00:08:11.701 20:36:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:11.701 20:36:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59191 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59191 ']' 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.701 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.702 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.702 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.702 20:36:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:11.702 [2024-11-26 20:36:06.650794] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:11.702 [2024-11-26 20:36:06.651136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59191 ] 00:08:11.961 [2024-11-26 20:36:06.844776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.220 [2024-11-26 20:36:07.021260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.209 20:36:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.209 20:36:07 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:13.209 20:36:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:13.209 20:36:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:13.209 20:36:07 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.209 20:36:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:13.209 { 00:08:13.209 "filename": "/tmp/spdk_mem_dump.txt" 00:08:13.209 } 00:08:13.209 20:36:07 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.209 20:36:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:13.209 DPDK memory size 824.000000 MiB in 1 heap(s) 00:08:13.209 1 heaps totaling size 824.000000 MiB 00:08:13.209 size: 824.000000 MiB heap id: 0 00:08:13.209 end heaps---------- 00:08:13.209 9 mempools totaling size 603.782043 MiB 00:08:13.209 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:13.209 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:13.209 size: 100.555481 MiB name: bdev_io_59191 00:08:13.209 size: 50.003479 MiB name: msgpool_59191 00:08:13.209 size: 36.509338 MiB name: fsdev_io_59191 00:08:13.209 size: 21.763794 MiB name: PDU_Pool 00:08:13.209 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:13.209 size: 4.133484 MiB name: evtpool_59191 00:08:13.209 size: 0.026123 MiB name: Session_Pool 00:08:13.209 end mempools------- 00:08:13.209 6 memzones totaling size 4.142822 MiB 00:08:13.209 size: 1.000366 MiB name: RG_ring_0_59191 00:08:13.209 size: 1.000366 MiB name: RG_ring_1_59191 00:08:13.209 size: 1.000366 MiB name: RG_ring_4_59191 00:08:13.209 size: 1.000366 MiB name: RG_ring_5_59191 00:08:13.209 size: 0.125366 MiB name: RG_ring_2_59191 00:08:13.209 size: 0.015991 MiB name: RG_ring_3_59191 00:08:13.209 end memzones------- 00:08:13.209 20:36:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:13.209 heap id: 0 total size: 824.000000 MiB number of busy elements: 318 number of free elements: 18 00:08:13.209 list of free elements. 
size: 16.780640 MiB 00:08:13.209 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:13.209 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:13.209 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:13.209 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:13.209 element at address: 0x200019900040 with size: 0.999939 MiB 00:08:13.209 element at address: 0x200019a00000 with size: 0.999084 MiB 00:08:13.209 element at address: 0x200032600000 with size: 0.994324 MiB 00:08:13.209 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:13.209 element at address: 0x200019200000 with size: 0.959656 MiB 00:08:13.209 element at address: 0x200019d00040 with size: 0.936401 MiB 00:08:13.209 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:13.209 element at address: 0x20001b400000 with size: 0.562195 MiB 00:08:13.209 element at address: 0x200000c00000 with size: 0.489197 MiB 00:08:13.209 element at address: 0x200019600000 with size: 0.487976 MiB 00:08:13.209 element at address: 0x200019e00000 with size: 0.485413 MiB 00:08:13.209 element at address: 0x200012c00000 with size: 0.433228 MiB 00:08:13.209 element at address: 0x200028800000 with size: 0.390442 MiB 00:08:13.209 element at address: 0x200000800000 with size: 0.350891 MiB
00:08:13.209 list of standard malloc elements. size: 199.288452 MiB 00:08:13.209 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:13.209 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:13.209 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:13.209 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:13.209 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:08:13.209 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:13.209 element at address: 0x200019deff40 with size: 0.062683 MiB 00:08:13.209 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:13.209 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:13.209 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:08:13.209 element at address: 0x200012bff040 with size: 0.000305 MiB
[several hundred further entries elided: the remainder of the standard malloc element list consists entirely of "element at address: 0x... with size: 0.000244 MiB" lines, i.e. minimum-size allocations that add no information beyond the list total above]
00:08:13.212 list of memzone associated elements.
size: 607.930908 MiB 00:08:13.212 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:08:13.212 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:13.212 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:08:13.212 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:13.212 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:08:13.212 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59191_0 00:08:13.212 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:13.212 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59191_0 00:08:13.212 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:13.212 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59191_0 00:08:13.212 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:08:13.212 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:13.212 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:08:13.212 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:13.212 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:13.212 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59191_0 00:08:13.212 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:13.212 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59191 00:08:13.212 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:13.212 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59191 00:08:13.212 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:08:13.212 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:13.212 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:08:13.212 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:13.212 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:13.212 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:13.212 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:08:13.212 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:13.212 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:13.212 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59191 00:08:13.212 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:13.212 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59191 00:08:13.212 element at address: 0x200019affd40 with size: 1.000549 MiB 00:08:13.212 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59191 00:08:13.212 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:08:13.212 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59191 00:08:13.212 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:13.212 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59191 00:08:13.212 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:13.212 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59191 00:08:13.212 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:08:13.212 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:13.212 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:08:13.212 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:13.212 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:08:13.212 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:08:13.212 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:13.212 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59191 00:08:13.212 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:13.212 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59191 00:08:13.212 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:08:13.212 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:13.212 element at address: 0x200028864140 with size: 0.023804 MiB 00:08:13.212 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:13.212 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:13.212 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59191 00:08:13.212 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:08:13.212 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:13.212 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:13.212 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59191 00:08:13.212 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:13.212 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59191 00:08:13.212 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:13.212 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59191 00:08:13.212 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:08:13.212 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:13.212 20:36:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:13.212 20:36:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59191 00:08:13.212 20:36:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59191 ']' 00:08:13.212 20:36:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59191 00:08:13.212 20:36:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:13.212 20:36:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.212 20:36:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59191 00:08:13.212 killing process with pid 59191 00:08:13.212 20:36:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.212 20:36:08 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.212 20:36:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59191' 00:08:13.212 20:36:08 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59191 00:08:13.212 20:36:08 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59191 00:08:16.500 00:08:16.500 real 0m4.470s 00:08:16.500 user 0m4.471s 00:08:16.500 sys 0m0.621s 00:08:16.500 ************************************ 00:08:16.500 END TEST dpdk_mem_utility 00:08:16.500 ************************************ 00:08:16.500 20:36:10 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.500 20:36:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:16.500 20:36:10 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:16.500 20:36:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.500 20:36:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.500 20:36:10 -- common/autotest_common.sh@10 -- # set +x 
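(The element and memzone lists above are the output of dpdk_memory_utility/test_dpdk_mem_info.sh, which queries the SPDK application with pid 59191 for its DPDK heap and memzone state before killing it. A minimal sketch of producing the same kind of dump by hand, assuming a target built in this repo is already running: env_dpdk_get_mem_stats is the SPDK RPC that writes these stats to a file, and /tmp/spdk_mem_dump.txt is its conventional output path, which may differ on a given system.)

  # sketch only, not the test script itself: dump DPDK memory stats from a live target
  sudo ./build/bin/spdk_tgt &               # assumes the target app was already built
  ./scripts/rpc.py env_dpdk_get_mem_stats   # the RPC replies with the dump file name
  less /tmp/spdk_mem_dump.txt               # free/malloc/memzone element lists as above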
00:08:16.500 ************************************ 00:08:16.500 START TEST event 00:08:16.500 ************************************ 00:08:16.500 20:36:10 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:16.500 * Looking for test storage... 00:08:16.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:16.500 20:36:10 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:16.500 20:36:10 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:16.500 20:36:10 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:16.500 20:36:11 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:16.500 20:36:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.500 20:36:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.500 20:36:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.500 20:36:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.500 20:36:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.500 20:36:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.500 20:36:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.500 20:36:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.500 20:36:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.500 20:36:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.500 20:36:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.500 20:36:11 event -- scripts/common.sh@344 -- # case "$op" in 00:08:16.500 20:36:11 event -- scripts/common.sh@345 -- # : 1 00:08:16.500 20:36:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.500 20:36:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.500 20:36:11 event -- scripts/common.sh@365 -- # decimal 1 00:08:16.500 20:36:11 event -- scripts/common.sh@353 -- # local d=1 00:08:16.500 20:36:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.500 20:36:11 event -- scripts/common.sh@355 -- # echo 1 00:08:16.500 20:36:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.500 20:36:11 event -- scripts/common.sh@366 -- # decimal 2 00:08:16.500 20:36:11 event -- scripts/common.sh@353 -- # local d=2 00:08:16.500 20:36:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.500 20:36:11 event -- scripts/common.sh@355 -- # echo 2 00:08:16.500 20:36:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.500 20:36:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.500 20:36:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.500 20:36:11 event -- scripts/common.sh@368 -- # return 0 00:08:16.500 20:36:11 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.500 20:36:11 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:16.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.500 --rc genhtml_branch_coverage=1 00:08:16.500 --rc genhtml_function_coverage=1 00:08:16.500 --rc genhtml_legend=1 00:08:16.500 --rc geninfo_all_blocks=1 00:08:16.500 --rc geninfo_unexecuted_blocks=1 00:08:16.500 00:08:16.500 ' 00:08:16.500 20:36:11 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:16.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.500 --rc genhtml_branch_coverage=1 00:08:16.500 --rc genhtml_function_coverage=1 00:08:16.500 --rc genhtml_legend=1 00:08:16.500 --rc 
geninfo_all_blocks=1 00:08:16.500 --rc geninfo_unexecuted_blocks=1 00:08:16.500 00:08:16.501 ' 00:08:16.501 20:36:11 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:16.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.501 --rc genhtml_branch_coverage=1 00:08:16.501 --rc genhtml_function_coverage=1 00:08:16.501 --rc genhtml_legend=1 00:08:16.501 --rc geninfo_all_blocks=1 00:08:16.501 --rc geninfo_unexecuted_blocks=1 00:08:16.501 00:08:16.501 ' 00:08:16.501 20:36:11 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:16.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.501 --rc genhtml_branch_coverage=1 00:08:16.501 --rc genhtml_function_coverage=1 00:08:16.501 --rc genhtml_legend=1 00:08:16.501 --rc geninfo_all_blocks=1 00:08:16.501 --rc geninfo_unexecuted_blocks=1 00:08:16.501 00:08:16.501 ' 00:08:16.501 20:36:11 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:16.501 20:36:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:16.501 20:36:11 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:16.501 20:36:11 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:16.501 20:36:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.501 20:36:11 event -- common/autotest_common.sh@10 -- # set +x 00:08:16.501 ************************************ 00:08:16.501 START TEST event_perf 00:08:16.501 ************************************ 00:08:16.501 20:36:11 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:16.501 Running I/O for 1 seconds...[2024-11-26 20:36:11.164573] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:16.501 [2024-11-26 20:36:11.165017] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59304 ] 00:08:16.501 [2024-11-26 20:36:11.362481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.759 [2024-11-26 20:36:11.507597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.759 [2024-11-26 20:36:11.507709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.759 [2024-11-26 20:36:11.507929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.759 Running I/O for 1 seconds...[2024-11-26 20:36:11.509223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.135 00:08:18.135 lcore 0: 94920 00:08:18.135 lcore 1: 94921 00:08:18.135 lcore 2: 94922 00:08:18.135 lcore 3: 94925 00:08:18.135 done. 
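(The "lcore N:" lines above are event_perf's result: each reactor prints how many events it processed during the timed run, so roughly 95k events per core over the 1-second window here. The core set and duration come from the -m core-mask and -t seconds flags visible in the invocation; a sketch of re-running the same binary with a smaller mask and a longer window, binary path as built in this workspace:)

  # sketch: same event_perf binary, reactors on cores 0-1 only, 5-second run
  ./test/event/event_perf/event_perf -m 0x3 -t 5
  # expected output: one "lcore N: <events processed>" line per active core, then "done."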
00:08:18.135 00:08:18.135 real 0m1.675s 00:08:18.135 user 0m4.372s 00:08:18.135 sys 0m0.159s 00:08:18.135 ************************************ 00:08:18.135 END TEST event_perf 00:08:18.135 ************************************ 00:08:18.135 20:36:12 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.135 20:36:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:18.135 20:36:12 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:18.135 20:36:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:18.135 20:36:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.135 20:36:12 event -- common/autotest_common.sh@10 -- # set +x 00:08:18.135 ************************************ 00:08:18.135 START TEST event_reactor 00:08:18.135 ************************************ 00:08:18.135 20:36:12 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:18.135 [2024-11-26 20:36:12.897487] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:18.135 [2024-11-26 20:36:12.897814] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59344 ] 00:08:18.135 [2024-11-26 20:36:13.096819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.430 [2024-11-26 20:36:13.262679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.809 test_start 00:08:19.809 oneshot 00:08:19.809 tick 100 00:08:19.809 tick 100 00:08:19.809 tick 250 00:08:19.809 tick 100 00:08:19.809 tick 100 00:08:19.809 tick 100 00:08:19.809 tick 250 00:08:19.809 tick 500 00:08:19.809 tick 100 00:08:19.809 tick 100 00:08:19.809 tick 250 00:08:19.809 tick 100 00:08:19.809 tick 100 00:08:19.809 test_end 00:08:19.809 00:08:19.809 real 0m1.665s 00:08:19.809 user 0m1.417s 00:08:19.809 sys 0m0.134s 00:08:19.809 ************************************ 00:08:19.809 END TEST event_reactor 00:08:19.809 ************************************ 00:08:19.809 20:36:14 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.809 20:36:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:19.809 20:36:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:19.809 20:36:14 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:19.809 20:36:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.809 20:36:14 event -- common/autotest_common.sh@10 -- # set +x 00:08:19.809 ************************************ 00:08:19.809 START TEST event_reactor_perf 00:08:19.809 ************************************ 00:08:19.809 20:36:14 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:19.809 [2024-11-26 20:36:14.618931] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:19.809 [2024-11-26 20:36:14.619105] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59380 ] 00:08:19.809 [2024-11-26 20:36:14.793532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.067 [2024-11-26 20:36:14.917738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.442 test_start 00:08:21.442 test_end 00:08:21.442 Performance: 341964 events per second 00:08:21.442 ************************************ 00:08:21.442 END TEST event_reactor_perf 00:08:21.442 ************************************ 00:08:21.442 00:08:21.442 real 0m1.628s 00:08:21.442 user 0m1.411s 00:08:21.442 sys 0m0.107s 00:08:21.442 20:36:16 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.442 20:36:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:21.442 20:36:16 event -- event/event.sh@49 -- # uname -s 00:08:21.442 20:36:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:21.442 20:36:16 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:21.442 20:36:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.442 20:36:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.442 20:36:16 event -- common/autotest_common.sh@10 -- # set +x 00:08:21.442 ************************************ 00:08:21.442 START TEST event_scheduler 00:08:21.442 ************************************ 00:08:21.442 20:36:16 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:21.442 * Looking for test storage... 
00:08:21.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:21.442 20:36:16 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.442 20:36:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.442 20:36:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.701 20:36:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.701 20:36:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:21.701 20:36:16 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.701 20:36:16 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.701 --rc genhtml_branch_coverage=1 00:08:21.701 --rc genhtml_function_coverage=1 00:08:21.701 --rc genhtml_legend=1 00:08:21.701 --rc geninfo_all_blocks=1 00:08:21.701 --rc geninfo_unexecuted_blocks=1 00:08:21.701 00:08:21.701 ' 00:08:21.701 20:36:16 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.701 --rc genhtml_branch_coverage=1 00:08:21.701 --rc genhtml_function_coverage=1 00:08:21.701 --rc genhtml_legend=1 00:08:21.701 --rc geninfo_all_blocks=1 00:08:21.701 --rc geninfo_unexecuted_blocks=1 00:08:21.701 00:08:21.701 ' 00:08:21.701 20:36:16 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.701 --rc genhtml_branch_coverage=1 00:08:21.701 --rc genhtml_function_coverage=1 00:08:21.701 --rc genhtml_legend=1 00:08:21.701 --rc geninfo_all_blocks=1 00:08:21.701 --rc geninfo_unexecuted_blocks=1 00:08:21.701 00:08:21.701 ' 00:08:21.701 20:36:16 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.701 --rc genhtml_branch_coverage=1 00:08:21.701 --rc genhtml_function_coverage=1 00:08:21.701 --rc genhtml_legend=1 00:08:21.701 --rc geninfo_all_blocks=1 00:08:21.701 --rc geninfo_unexecuted_blocks=1 00:08:21.701 00:08:21.701 ' 00:08:21.701 20:36:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:21.701 20:36:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59456 00:08:21.701 20:36:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:21.701 20:36:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:21.701 20:36:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59456 00:08:21.701 20:36:16 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59456 ']' 00:08:21.701 20:36:16 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.701 20:36:16 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.701 20:36:16 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.701 20:36:16 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.701 20:36:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:21.701 [2024-11-26 20:36:16.603376] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:21.701 [2024-11-26 20:36:16.603801] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59456 ] 00:08:21.959 [2024-11-26 20:36:16.811166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.217 [2024-11-26 20:36:16.987510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.217 [2024-11-26 20:36:16.987595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.217 [2024-11-26 20:36:16.987759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.217 [2024-11-26 20:36:16.987773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.785 20:36:17 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.785 20:36:17 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:22.785 20:36:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:22.785 20:36:17 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.785 20:36:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:22.785 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:22.785 POWER: Cannot set governor of lcore 0 to userspace 00:08:22.785 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:22.785 POWER: Cannot set governor of lcore 0 to performance 00:08:22.785 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:22.785 POWER: Cannot set governor of lcore 0 to userspace 00:08:22.785 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:22.785 POWER: Cannot set governor of lcore 0 to userspace 00:08:22.785 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:22.785 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:22.785 POWER: Unable to set Power Management Environment for lcore 0 00:08:22.785 [2024-11-26 20:36:17.590370] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:22.785 [2024-11-26 20:36:17.590413] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:22.785 [2024-11-26 20:36:17.590428] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:22.785 [2024-11-26 20:36:17.590465] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:22.785 [2024-11-26 20:36:17.590480] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:22.785 [2024-11-26 20:36:17.590494] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:22.785 20:36:17 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.785 20:36:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:22.785 20:36:17 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.785 20:36:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 [2024-11-26 20:36:17.945561] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:23.043 20:36:17 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 20:36:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:23.043 20:36:17 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.043 20:36:17 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.043 20:36:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 ************************************ 00:08:23.043 START TEST scheduler_create_thread 00:08:23.043 ************************************ 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 2 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 3 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 4 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 5 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 20:36:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 6 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 7 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.043 8 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.043 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.302 9 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.302 10 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.302 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.560 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.560 20:36:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:23.560 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.560 20:36:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:25.471 20:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.471 20:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:25.471 20:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:25.471 20:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.471 20:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:26.404 20:36:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.404 00:08:26.404 real 0m3.099s 00:08:26.404 user 0m0.021s 00:08:26.404 sys 0m0.006s 00:08:26.404 ************************************ 00:08:26.404 END TEST scheduler_create_thread 00:08:26.404 ************************************ 00:08:26.404 20:36:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.404 20:36:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:26.404 20:36:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:26.404 20:36:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59456 00:08:26.404 20:36:21 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59456 ']' 00:08:26.404 20:36:21 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59456 00:08:26.404 20:36:21 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:26.404 20:36:21 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.404 20:36:21 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59456 00:08:26.404 killing process with pid 59456 00:08:26.404 20:36:21 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:26.404 20:36:21 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:26.404 20:36:21 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59456' 00:08:26.404 20:36:21 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59456 00:08:26.404 20:36:21 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59456 00:08:26.662 [2024-11-26 20:36:21.438279] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:28.075 ************************************ 00:08:28.075 END TEST event_scheduler 00:08:28.075 00:08:28.075 real 0m6.598s 00:08:28.075 user 0m13.438s 00:08:28.075 sys 0m0.588s 00:08:28.075 20:36:22 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.075 20:36:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:28.075 ************************************ 00:08:28.075 20:36:22 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:28.075 20:36:22 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:28.075 20:36:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.075 20:36:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.075 20:36:22 event -- common/autotest_common.sh@10 -- # set +x 00:08:28.075 ************************************ 00:08:28.075 START TEST app_repeat 00:08:28.075 ************************************ 00:08:28.075 20:36:22 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:28.075 20:36:22 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.075 20:36:22 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:28.075 20:36:22 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:28.075 20:36:22 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:28.075 20:36:22 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:28.075 20:36:22 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:28.075 20:36:22 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:28.076 20:36:22 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59581 00:08:28.076 20:36:22 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:28.076 20:36:22 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:28.076 Process app_repeat pid: 59581 00:08:28.076 20:36:22 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59581' 00:08:28.076 spdk_app_start Round 0 00:08:28.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:28.076 20:36:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:28.076 20:36:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:28.076 20:36:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59581 /var/tmp/spdk-nbd.sock 00:08:28.076 20:36:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59581 ']' 00:08:28.076 20:36:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:28.076 20:36:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.076 20:36:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:28.076 20:36:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.076 20:36:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:28.076 [2024-11-26 20:36:23.015361] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:28.076 [2024-11-26 20:36:23.015828] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59581 ] 00:08:28.334 [2024-11-26 20:36:23.222248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.592 [2024-11-26 20:36:23.398323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.592 [2024-11-26 20:36:23.398347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.159 20:36:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.159 20:36:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:29.159 20:36:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:29.727 Malloc0 00:08:29.727 20:36:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:29.985 Malloc1 00:08:30.243 20:36:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.243 20:36:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:30.502 /dev/nbd0 00:08:30.502 20:36:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:30.502 20:36:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:30.502 20:36:25 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:30.502 1+0 records in 00:08:30.502 1+0 records out 00:08:30.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437793 s, 9.4 MB/s 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:30.502 20:36:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:30.502 20:36:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.502 20:36:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.502 20:36:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:30.760 /dev/nbd1 00:08:30.760 20:36:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:30.760 20:36:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:30.760 1+0 records in 00:08:30.760 1+0 records out 00:08:30.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362796 s, 11.3 MB/s 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:30.760 20:36:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:30.760 20:36:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.760 20:36:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.760 20:36:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.760 20:36:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.760 
20:36:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:31.327 { 00:08:31.327 "nbd_device": "/dev/nbd0", 00:08:31.327 "bdev_name": "Malloc0" 00:08:31.327 }, 00:08:31.327 { 00:08:31.327 "nbd_device": "/dev/nbd1", 00:08:31.327 "bdev_name": "Malloc1" 00:08:31.327 } 00:08:31.327 ]' 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:31.327 { 00:08:31.327 "nbd_device": "/dev/nbd0", 00:08:31.327 "bdev_name": "Malloc0" 00:08:31.327 }, 00:08:31.327 { 00:08:31.327 "nbd_device": "/dev/nbd1", 00:08:31.327 "bdev_name": "Malloc1" 00:08:31.327 } 00:08:31.327 ]' 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:31.327 /dev/nbd1' 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:31.327 /dev/nbd1' 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.327 20:36:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:31.328 256+0 records in 00:08:31.328 256+0 records out 00:08:31.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00609565 s, 172 MB/s 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:31.328 256+0 records in 00:08:31.328 256+0 records out 00:08:31.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273572 s, 38.3 MB/s 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:31.328 256+0 records in 00:08:31.328 256+0 records out 00:08:31.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0410445 s, 25.5 MB/s 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.328 20:36:26 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.328 20:36:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:31.894 20:36:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:31.894 20:36:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:31.894 20:36:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:31.894 20:36:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.894 20:36:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.894 20:36:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:31.894 20:36:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:31.894 20:36:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.894 20:36:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.894 20:36:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:32.151 20:36:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:32.151 20:36:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:32.151 20:36:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:32.151 20:36:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.151 20:36:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.151 20:36:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:32.152 20:36:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:32.152 20:36:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.152 20:36:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:32.152 20:36:26 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.152 20:36:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:32.409 20:36:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:32.409 20:36:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:32.992 20:36:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:34.369 [2024-11-26 20:36:29.361803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:34.628 [2024-11-26 20:36:29.487716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.628 [2024-11-26 20:36:29.487716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.887 [2024-11-26 20:36:29.708951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:34.887 [2024-11-26 20:36:29.709049] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:36.324 spdk_app_start Round 1 00:08:36.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:36.324 20:36:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:36.324 20:36:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:36.324 20:36:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59581 /var/tmp/spdk-nbd.sock 00:08:36.324 20:36:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59581 ']' 00:08:36.324 20:36:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:36.324 20:36:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.324 20:36:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
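Round 0 above, and every round that follows, repeats the same RPC-driven cycle: create two malloc bdevs, export them to the kernel over NBD, push random data through the block layer, read it back, then tear down and signal the app so it restarts for the next round. Condensed into one place, using only commands that appear in the trace (file paths abbreviated; bdev_malloc_create takes total size in MB and a block size):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create 64 4096             # -> Malloc0: 64 MB, 4 KiB blocks
    $rpc -s $sock bdev_malloc_create 64 4096             # -> Malloc1
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0       # expose bdevs as kernel NBD devices
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256  # 1 MiB of reference data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest $nbd                    # byte-for-byte verification
    done
    rm nbdrandtest
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd1
    $rpc -s $sock spdk_kill_instance SIGTERM             # end this round

The oflag=direct on the write and the direct read in the verify step keep the page cache out of the way, so the cmp genuinely exercises the SPDK bdev path rather than cached pages.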
00:08:36.324 20:36:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.324 20:36:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:36.583 20:36:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.583 20:36:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:36.583 20:36:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:36.843 Malloc0 00:08:36.843 20:36:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:37.101 Malloc1 00:08:37.101 20:36:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.101 20:36:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:37.667 /dev/nbd0 00:08:37.667 20:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:37.667 20:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:37.667 1+0 records in 00:08:37.667 1+0 records out 
00:08:37.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368984 s, 11.1 MB/s 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:37.667 20:36:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:37.667 20:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.667 20:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.667 20:36:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:37.925 /dev/nbd1 00:08:37.925 20:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:37.925 20:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:37.925 1+0 records in 00:08:37.925 1+0 records out 00:08:37.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552483 s, 7.4 MB/s 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:37.925 20:36:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:37.925 20:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.925 20:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.925 20:36:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:37.925 20:36:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.925 20:36:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:38.520 { 00:08:38.520 "nbd_device": "/dev/nbd0", 00:08:38.520 "bdev_name": "Malloc0" 00:08:38.520 }, 00:08:38.520 { 00:08:38.520 "nbd_device": "/dev/nbd1", 00:08:38.520 "bdev_name": "Malloc1" 00:08:38.520 } 
00:08:38.520 ]' 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:38.520 { 00:08:38.520 "nbd_device": "/dev/nbd0", 00:08:38.520 "bdev_name": "Malloc0" 00:08:38.520 }, 00:08:38.520 { 00:08:38.520 "nbd_device": "/dev/nbd1", 00:08:38.520 "bdev_name": "Malloc1" 00:08:38.520 } 00:08:38.520 ]' 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:38.520 /dev/nbd1' 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:38.520 /dev/nbd1' 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:38.520 256+0 records in 00:08:38.520 256+0 records out 00:08:38.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00765819 s, 137 MB/s 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.520 20:36:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:38.520 256+0 records in 00:08:38.520 256+0 records out 00:08:38.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0345297 s, 30.4 MB/s 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:38.521 256+0 records in 00:08:38.521 256+0 records out 00:08:38.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0427311 s, 24.5 MB/s 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:38.521 20:36:33 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.521 20:36:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:39.087 20:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:39.087 20:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:39.087 20:36:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:39.087 20:36:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.087 20:36:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.087 20:36:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:39.087 20:36:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:39.087 20:36:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.087 20:36:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.087 20:36:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:39.345 20:36:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:39.345 20:36:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:39.345 20:36:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:39.345 20:36:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.345 20:36:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.345 20:36:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:39.345 20:36:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:39.345 20:36:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.345 20:36:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.345 20:36:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.345 20:36:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:39.604 20:36:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:39.604 20:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:39.604 20:36:34 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:39.604 20:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:39.604 20:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:39.604 20:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:39.863 20:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:39.863 20:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:39.863 20:36:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:39.863 20:36:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:39.863 20:36:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:39.863 20:36:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:39.863 20:36:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:40.432 20:36:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:41.808 [2024-11-26 20:36:36.549647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.808 [2024-11-26 20:36:36.684137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.808 [2024-11-26 20:36:36.684161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.066 [2024-11-26 20:36:36.912041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:42.066 [2024-11-26 20:36:36.912159] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:43.443 spdk_app_start Round 2 00:08:43.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:43.443 20:36:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:43.443 20:36:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:43.443 20:36:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59581 /var/tmp/spdk-nbd.sock 00:08:43.443 20:36:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59581 ']' 00:08:43.443 20:36:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:43.443 20:36:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.443 20:36:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
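Between the dd passes, the waitfornbd/waitfornbd_exit helpers traced above gate each step: a device is only used once it appears in /proc/partitions and a 4 KiB O_DIRECT read from it succeeds, and a round only ends once nbd_get_disks reports no devices left. A rough reconstruction of both checks; the retry bound of 20 is visible in the trace, but the sleep between attempts is an assumption, since xtrace does not show it:

    # Readiness check, per device (sleep between the 20 attempts is assumed).
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                    # assumed back-off, not shown in xtrace
        done
        dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s nbdtest)
        rm -f nbdtest
        [ "$size" != 0 ]                                 # a real read must have landed
    }
    # Teardown check: no /dev/nbd* entries may remain after nbd_stop_disk.
    count=$($rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]

The trailing true (visible as nbd_common.sh@65 -- # true in the trace) absorbs grep -c's non-zero exit status when nothing matches, so an empty disk list counts as success rather than aborting the script.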
00:08:43.443 20:36:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.443 20:36:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:43.701 20:36:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.702 20:36:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:43.702 20:36:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:43.960 Malloc0 00:08:43.960 20:36:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:44.219 Malloc1 00:08:44.219 20:36:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.219 20:36:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:44.477 /dev/nbd0 00:08:44.735 20:36:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:44.735 20:36:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:44.735 1+0 records in 00:08:44.735 1+0 records out 
00:08:44.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411028 s, 10.0 MB/s 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:44.735 20:36:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:44.735 20:36:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.735 20:36:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.735 20:36:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:44.994 /dev/nbd1 00:08:44.994 20:36:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:44.994 20:36:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:44.994 1+0 records in 00:08:44.994 1+0 records out 00:08:44.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471895 s, 8.7 MB/s 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:44.994 20:36:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:44.994 20:36:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.994 20:36:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.995 20:36:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:44.995 20:36:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.995 20:36:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.254 20:36:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:45.254 { 00:08:45.254 "nbd_device": "/dev/nbd0", 00:08:45.254 "bdev_name": "Malloc0" 00:08:45.254 }, 00:08:45.254 { 00:08:45.254 "nbd_device": "/dev/nbd1", 00:08:45.254 "bdev_name": "Malloc1" 00:08:45.254 } 
00:08:45.254 ]' 00:08:45.254 20:36:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:45.254 { 00:08:45.254 "nbd_device": "/dev/nbd0", 00:08:45.254 "bdev_name": "Malloc0" 00:08:45.254 }, 00:08:45.254 { 00:08:45.254 "nbd_device": "/dev/nbd1", 00:08:45.254 "bdev_name": "Malloc1" 00:08:45.254 } 00:08:45.254 ]' 00:08:45.254 20:36:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:45.513 /dev/nbd1' 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:45.513 /dev/nbd1' 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:45.513 256+0 records in 00:08:45.513 256+0 records out 00:08:45.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00757321 s, 138 MB/s 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:45.513 256+0 records in 00:08:45.513 256+0 records out 00:08:45.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306835 s, 34.2 MB/s 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:45.513 256+0 records in 00:08:45.513 256+0 records out 00:08:45.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0405571 s, 25.9 MB/s 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:45.513 20:36:40 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.513 20:36:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:45.772 20:36:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:45.772 20:36:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:45.772 20:36:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:45.772 20:36:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.772 20:36:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.772 20:36:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:45.772 20:36:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:45.772 20:36:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.772 20:36:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.772 20:36:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:46.031 20:36:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:46.031 20:36:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:46.031 20:36:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:46.031 20:36:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.031 20:36:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.031 20:36:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:46.289 20:36:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:46.289 20:36:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.289 20:36:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:46.289 20:36:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.289 20:36:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.547 20:36:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:46.547 20:36:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.547 20:36:41 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:46.547 20:36:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:46.547 20:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:46.547 20:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.547 20:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:46.547 20:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:46.547 20:36:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:46.547 20:36:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:46.547 20:36:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:46.547 20:36:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:46.547 20:36:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:47.115 20:36:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:48.490 [2024-11-26 20:36:43.377579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:48.748 [2024-11-26 20:36:43.506507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.748 [2024-11-26 20:36:43.506510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.748 [2024-11-26 20:36:43.723198] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:48.748 [2024-11-26 20:36:43.723313] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:50.125 20:36:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59581 /var/tmp/spdk-nbd.sock 00:08:50.125 20:36:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59581 ']' 00:08:50.125 20:36:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:50.125 20:36:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:50.125 20:36:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
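The waitforlisten above begins the fourth and final round; below, instead of another dd cycle, the harness tears the application down for good via killprocess, which refuses to kill the wrong thing before reaping the pid. A sketch following the checks visible in the lines that follow:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                                   # process must still be alive
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in this log
        fi
        if [ "$process_name" = sudo ]; then
            :   # the real helper special-cases sudo wrappers here (branch not exercised in this log)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap and propagate the exit status
    }

The final wait is what lets the per-round summary below ("spdk_app_start is called in Round 0..3") print only after the application has fully exited.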
00:08:50.125 20:36:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.125 20:36:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:50.384 20:36:45 event.app_repeat -- event/event.sh@39 -- # killprocess 59581 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59581 ']' 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59581 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59581 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.384 killing process with pid 59581 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59581' 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59581 00:08:50.384 20:36:45 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59581 00:08:51.792 spdk_app_start is called in Round 0. 00:08:51.792 Shutdown signal received, stop current app iteration 00:08:51.792 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:08:51.792 spdk_app_start is called in Round 1. 00:08:51.792 Shutdown signal received, stop current app iteration 00:08:51.792 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:08:51.792 spdk_app_start is called in Round 2. 00:08:51.792 Shutdown signal received, stop current app iteration 00:08:51.792 Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 reinitialization... 00:08:51.792 spdk_app_start is called in Round 3. 00:08:51.792 Shutdown signal received, stop current app iteration 00:08:51.792 20:36:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:51.792 20:36:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:51.792 00:08:51.792 real 0m23.799s 00:08:51.792 user 0m52.228s 00:08:51.792 sys 0m4.132s 00:08:51.792 ************************************ 00:08:51.792 END TEST app_repeat 00:08:51.792 ************************************ 00:08:51.792 20:36:46 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.792 20:36:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:51.792 20:36:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:51.792 20:36:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:51.792 20:36:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.792 20:36:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.792 20:36:46 event -- common/autotest_common.sh@10 -- # set +x 00:08:52.052 ************************************ 00:08:52.052 START TEST cpu_locks 00:08:52.052 ************************************ 00:08:52.052 20:36:46 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:52.052 * Looking for test storage... 
00:08:52.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:52.052 20:36:46 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.052 20:36:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.052 20:36:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.052 20:36:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:52.052 20:36:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.053 20:36:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:52.053 20:36:46 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.053 20:36:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.053 --rc genhtml_branch_coverage=1 00:08:52.053 --rc genhtml_function_coverage=1 00:08:52.053 --rc genhtml_legend=1 00:08:52.053 --rc geninfo_all_blocks=1 00:08:52.053 --rc geninfo_unexecuted_blocks=1 00:08:52.053 00:08:52.053 ' 00:08:52.053 20:36:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.053 --rc genhtml_branch_coverage=1 00:08:52.053 --rc genhtml_function_coverage=1 
00:08:52.053 --rc genhtml_legend=1 00:08:52.053 --rc geninfo_all_blocks=1 00:08:52.053 --rc geninfo_unexecuted_blocks=1 00:08:52.053 00:08:52.053 ' 00:08:52.053 20:36:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.053 --rc genhtml_branch_coverage=1 00:08:52.053 --rc genhtml_function_coverage=1 00:08:52.053 --rc genhtml_legend=1 00:08:52.053 --rc geninfo_all_blocks=1 00:08:52.053 --rc geninfo_unexecuted_blocks=1 00:08:52.053 00:08:52.053 ' 00:08:52.053 20:36:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.053 --rc genhtml_branch_coverage=1 00:08:52.053 --rc genhtml_function_coverage=1 00:08:52.053 --rc genhtml_legend=1 00:08:52.053 --rc geninfo_all_blocks=1 00:08:52.053 --rc geninfo_unexecuted_blocks=1 00:08:52.053 00:08:52.053 ' 00:08:52.053 20:36:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:52.053 20:36:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:52.053 20:36:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:52.053 20:36:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:52.053 20:36:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.053 20:36:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.053 20:36:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:52.053 ************************************ 00:08:52.053 START TEST default_locks 00:08:52.053 ************************************ 00:08:52.053 20:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:52.053 20:36:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60077 00:08:52.053 20:36:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60077 00:08:52.053 20:36:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:52.053 20:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60077 ']' 00:08:52.053 20:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.053 20:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.053 20:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.053 20:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.053 20:36:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:52.312 [2024-11-26 20:36:47.127108] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:08:52.312 [2024-11-26 20:36:47.127306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60077 ] 00:08:52.570 [2024-11-26 20:36:47.314067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.570 [2024-11-26 20:36:47.519876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.946 20:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.946 20:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:53.946 20:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60077 00:08:53.947 20:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60077 00:08:53.947 20:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:54.512 20:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60077 00:08:54.512 20:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60077 ']' 00:08:54.512 20:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60077 00:08:54.512 20:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:54.512 20:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.512 20:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60077 00:08:54.512 20:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.512 20:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.512 killing process with pid 60077 00:08:54.512 20:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60077' 00:08:54.512 20:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60077 00:08:54.512 20:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60077 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60077 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60077 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60077 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60077 ']' 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
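The locks_exist check traced above reduces to one pipeline: ask the kernel (via lslocks) which file locks the target's PID holds, and look for the spdk_cpu_lock prefix, since spdk_tgt takes one lock file per core in its -m mask. A minimal standalone sketch of that helper, assuming the same /var/tmp naming this run uses (the real helper is in test/event/cpu_locks.sh):

# Return 0 if <pid> holds at least one SPDK CPU-core lock.
# spdk_tgt locks /var/tmp/spdk_cpu_lock_NNN for every core in its mask.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist 60077 && echo "target still holds its core locks"   # pid from the run above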
00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:57.799 ERROR: process (pid: 60077) is no longer running 00:08:57.799 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60077) - No such process 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:57.799 00:08:57.799 real 0m5.722s 00:08:57.799 user 0m5.662s 00:08:57.799 sys 0m0.961s 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.799 ************************************ 00:08:57.799 END TEST default_locks 00:08:57.799 ************************************ 00:08:57.799 20:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:57.799 20:36:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:57.799 20:36:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.799 20:36:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.799 20:36:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:57.799 ************************************ 00:08:57.799 START TEST default_locks_via_rpc 00:08:57.799 ************************************ 00:08:57.799 20:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:57.799 20:36:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60169 00:08:57.799 20:36:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60169 00:08:57.799 20:36:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:57.799 20:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60169 ']' 00:08:57.799 20:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.799 20:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.799 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:08:57.799 20:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.799 20:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.799 20:36:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.059 [2024-11-26 20:36:52.964029] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:08:58.059 [2024-11-26 20:36:52.964253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60169 ] 00:08:58.317 [2024-11-26 20:36:53.166645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.575 [2024-11-26 20:36:53.360215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60169 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60169 00:08:59.949 20:36:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:00.207 20:36:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60169 00:09:00.207 20:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60169 ']' 00:09:00.207 20:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60169 00:09:00.207 20:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:00.207 20:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.207 20:36:55 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60169 00:09:00.207 20:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.207 20:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.207 killing process with pid 60169 00:09:00.207 20:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60169' 00:09:00.207 20:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60169 00:09:00.207 20:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60169 00:09:03.542 00:09:03.542 real 0m5.255s 00:09:03.542 user 0m5.080s 00:09:03.542 sys 0m1.010s 00:09:03.542 20:36:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.542 20:36:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.542 ************************************ 00:09:03.542 END TEST default_locks_via_rpc 00:09:03.542 ************************************ 00:09:03.542 20:36:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:03.542 20:36:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.542 20:36:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.542 20:36:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:03.542 ************************************ 00:09:03.542 START TEST non_locking_app_on_locked_coremask 00:09:03.542 ************************************ 00:09:03.542 20:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:03.542 20:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60254 00:09:03.542 20:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60254 /var/tmp/spdk.sock 00:09:03.542 20:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60254 ']' 00:09:03.542 20:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.542 20:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:03.542 20:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.542 20:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.542 20:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.542 20:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:03.542 [2024-11-26 20:36:58.193920] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
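default_locks_via_rpc, which finishes above, drives the same lock lifecycle over JSON-RPC instead of process startup: framework_disable_cpumask_locks releases a live target's per-core lock files, and framework_enable_cpumask_locks re-acquires them. A sketch of that sequence using the stock rpc.py client; the repository path is an assumption based on this run's layout, and rpc_cmd in the trace is effectively a thin wrapper around the same calls:

#!/usr/bin/env bash
set -euo pipefail
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path assumed from this run
SOCK=/var/tmp/spdk.sock

# Drop the /var/tmp/spdk_cpu_lock_* files without stopping the target.
"$RPC" -s "$SOCK" framework_disable_cpumask_locks

# Re-acquire them; this fails if another process claimed a core meanwhile.
"$RPC" -s "$SOCK" framework_enable_cpumask_locks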
00:09:03.542 [2024-11-26 20:36:58.194069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60254 ] 00:09:03.542 [2024-11-26 20:36:58.380870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.859 [2024-11-26 20:36:58.536714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.810 20:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.810 20:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:04.810 20:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60275 00:09:04.810 20:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:04.810 20:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60275 /var/tmp/spdk2.sock 00:09:04.810 20:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60275 ']' 00:09:04.810 20:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:04.810 20:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:04.810 20:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:04.810 20:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.810 20:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:04.810 [2024-11-26 20:36:59.702370] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:04.810 [2024-11-26 20:36:59.703141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60275 ] 00:09:05.069 [2024-11-26 20:36:59.930940] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:05.069 [2024-11-26 20:36:59.931045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.327 [2024-11-26 20:37:00.215569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.857 20:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.857 20:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:07.857 20:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60254 00:09:07.857 20:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60254 00:09:07.857 20:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:08.788 20:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60254 00:09:08.789 20:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60254 ']' 00:09:08.789 20:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60254 00:09:08.789 20:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:08.789 20:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.789 20:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60254 00:09:08.789 20:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.789 killing process with pid 60254 00:09:08.789 20:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.789 20:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60254' 00:09:08.789 20:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60254 00:09:08.789 20:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60254 00:09:15.350 20:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60275 00:09:15.350 20:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60275 ']' 00:09:15.350 20:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60275 00:09:15.350 20:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:15.350 20:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.350 20:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60275 00:09:15.350 20:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.350 killing process with pid 60275 00:09:15.350 20:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.350 20:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60275' 00:09:15.350 20:37:09 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60275 00:09:15.350 20:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60275 00:09:17.288 00:09:17.288 real 0m14.133s 00:09:17.288 user 0m14.875s 00:09:17.288 sys 0m1.672s 00:09:17.288 20:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.288 20:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:17.288 ************************************ 00:09:17.288 END TEST non_locking_app_on_locked_coremask 00:09:17.288 ************************************ 00:09:17.288 20:37:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:17.288 20:37:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.288 20:37:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.288 20:37:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:17.546 ************************************ 00:09:17.546 START TEST locking_app_on_unlocked_coremask 00:09:17.546 ************************************ 00:09:17.546 20:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:17.546 20:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60451 00:09:17.546 20:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60451 /var/tmp/spdk.sock 00:09:17.546 20:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:17.546 20:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60451 ']' 00:09:17.546 20:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.546 20:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.546 20:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.546 20:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.546 20:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:17.546 [2024-11-26 20:37:12.450265] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:17.546 [2024-11-26 20:37:12.450512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60451 ] 00:09:17.804 [2024-11-26 20:37:12.674474] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
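The non-locking tests around this point hinge on one flag: a target started with --disable-cpumask-locks never touches the lock files, so a second target may share its cores, and each instance needs its own RPC socket via -r. A condensed sketch of that arrangement (binary path assumed from this run; the sleep stands in for the waitforlisten polling the trace shows):

#!/usr/bin/env bash
BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # path assumed

# First target claims core 0 and creates /var/tmp/spdk_cpu_lock_000.
"$BIN" -m 0x1 -r /var/tmp/spdk.sock &
sleep 2

# Second target also wants core 0 but skips lock acquisition entirely,
# so startup succeeds; it must listen on a distinct RPC socket.
"$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

wait   # both targets run until killed externally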
00:09:17.804 [2024-11-26 20:37:12.674563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.062 [2024-11-26 20:37:12.868550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.436 20:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.436 20:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:19.436 20:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60473 00:09:19.436 20:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:19.436 20:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60473 /var/tmp/spdk2.sock 00:09:19.436 20:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60473 ']' 00:09:19.436 20:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:19.436 20:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:19.436 20:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:19.436 20:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.436 20:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:19.436 [2024-11-26 20:37:14.298789] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:19.436 [2024-11-26 20:37:14.298993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60473 ] 00:09:19.696 [2024-11-26 20:37:14.522056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.955 [2024-11-26 20:37:14.807007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.483 20:37:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.483 20:37:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:22.483 20:37:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60473 00:09:22.483 20:37:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:22.483 20:37:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60473 00:09:23.419 20:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60451 00:09:23.419 20:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60451 ']' 00:09:23.419 20:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60451 00:09:23.419 20:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:23.419 20:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.678 20:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60451 00:09:23.678 20:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.678 killing process with pid 60451 00:09:23.678 20:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.678 20:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60451' 00:09:23.678 20:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60451 00:09:23.678 20:37:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60451 00:09:28.947 20:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60473 00:09:28.947 20:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60473 ']' 00:09:28.947 20:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60473 00:09:28.947 20:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:29.206 20:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.206 20:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60473 00:09:29.206 20:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.206 20:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.206 20:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60473' 00:09:29.206 killing process with pid 60473 00:09:29.206 20:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60473 00:09:29.206 20:37:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60473 00:09:32.493 00:09:32.493 real 0m14.635s 00:09:32.493 user 0m15.551s 00:09:32.493 sys 0m1.860s 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.493 ************************************ 00:09:32.493 END TEST locking_app_on_unlocked_coremask 00:09:32.493 ************************************ 00:09:32.493 20:37:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:32.493 20:37:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:32.493 20:37:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.493 20:37:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:32.493 ************************************ 00:09:32.493 START TEST locking_app_on_locked_coremask 00:09:32.493 ************************************ 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60643 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60643 /var/tmp/spdk.sock 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60643 ']' 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.493 20:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.493 [2024-11-26 20:37:27.126929] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:32.493 [2024-11-26 20:37:27.127158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60643 ] 00:09:32.493 [2024-11-26 20:37:27.333906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.751 [2024-11-26 20:37:27.507786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60670 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60670 /var/tmp/spdk2.sock 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60670 /var/tmp/spdk2.sock 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60670 /var/tmp/spdk2.sock 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60670 ']' 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:34.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.129 20:37:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:34.129 [2024-11-26 20:37:28.903357] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:09:34.129 [2024-11-26 20:37:28.903563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60670 ] 00:09:34.402 [2024-11-26 20:37:29.130484] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60643 has claimed it. 00:09:34.402 [2024-11-26 20:37:29.130610] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:34.663 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60670) - No such process 00:09:34.663 ERROR: process (pid: 60670) is no longer running 00:09:34.663 20:37:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.663 20:37:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:34.663 20:37:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:34.663 20:37:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.663 20:37:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:34.663 20:37:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.663 20:37:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60643 00:09:34.663 20:37:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60643 00:09:34.663 20:37:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:35.230 20:37:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60643 00:09:35.230 20:37:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60643 ']' 00:09:35.230 20:37:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60643 00:09:35.230 20:37:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:35.230 20:37:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.230 20:37:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60643 00:09:35.230 20:37:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.230 20:37:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.230 killing process with pid 60643 00:09:35.230 20:37:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60643' 00:09:35.230 20:37:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60643 00:09:35.230 20:37:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60643 00:09:38.513 00:09:38.513 real 0m6.360s 00:09:38.513 user 0m6.596s 00:09:38.513 sys 0m1.299s 00:09:38.513 20:37:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.513 20:37:33 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:09:38.513 ************************************ 00:09:38.513 END TEST locking_app_on_locked_coremask 00:09:38.513 ************************************ 00:09:38.514 20:37:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:38.514 20:37:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.514 20:37:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.514 20:37:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:38.514 ************************************ 00:09:38.514 START TEST locking_overlapped_coremask 00:09:38.514 ************************************ 00:09:38.514 20:37:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:38.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.514 20:37:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60745 00:09:38.514 20:37:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60745 /var/tmp/spdk.sock 00:09:38.514 20:37:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60745 ']' 00:09:38.514 20:37:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.514 20:37:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:38.514 20:37:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.514 20:37:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.514 20:37:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.514 20:37:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:38.772 [2024-11-26 20:37:33.585742] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
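locking_app_on_locked_coremask, which ends above, checks the negative path: with locks active on both sides, a second target on an already-claimed core must refuse to start, logging "Cannot create lock on core 0, probably process <pid> has claimed it" and "Unable to acquire lock on assigned core mask - exiting." A condensed reproduction of that collision, binary path assumed:

#!/usr/bin/env bash
BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # path assumed

"$BIN" -m 0x1 -r /var/tmp/spdk.sock &   # claims core 0
first=$!
sleep 2                                  # crude stand-in for waitforlisten

# Same mask, locks enabled on both sides: this startup must exit non-zero
# almost immediately with the claim_cpu_cores error shown in the trace.
if "$BIN" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "unexpected: second target started" >&2
fi

kill "$first"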
00:09:38.772 [2024-11-26 20:37:33.585944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60745 ] 00:09:39.030 [2024-11-26 20:37:33.802749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:39.030 [2024-11-26 20:37:34.015906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.030 [2024-11-26 20:37:34.015996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.030 [2024-11-26 20:37:34.016013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60774 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60774 /var/tmp/spdk2.sock 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60774 /var/tmp/spdk2.sock 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60774 /var/tmp/spdk2.sock 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60774 ']' 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:40.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.928 20:37:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:40.929 [2024-11-26 20:37:35.596771] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
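The overlapped tests pick their masks so exactly one core collides: 0x7 is binary 111 (cores 0, 1, 2) and 0x1c is binary 11100 (cores 2, 3, 4), so core 2 is the only contested core, which is why the claim failure below names core 2. A small decoder for checking this, sketched as a plain bash function:

# Print the core numbers selected by a hex cpumask.
mask_to_cores() {
    local mask=$(( $1 )) core=0 out=""
    while (( mask > 0 )); do
        if (( mask & 1 )); then out+="$core "; fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${out% }"
}

mask_to_cores 0x7    # -> 0 1 2
mask_to_cores 0x1c   # -> 2 3 4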
00:09:40.929 [2024-11-26 20:37:35.597684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60774 ] 00:09:40.929 [2024-11-26 20:37:35.820201] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60745 has claimed it. 00:09:40.929 [2024-11-26 20:37:35.820320] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:41.494 ERROR: process (pid: 60774) is no longer running 00:09:41.494 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60774) - No such process 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60745 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60745 ']' 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60745 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60745 00:09:41.494 killing process with pid 60745 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60745' 00:09:41.494 20:37:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60745 00:09:41.494 20:37:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60745 00:09:44.776 00:09:44.776 real 0m6.286s 00:09:44.776 user 0m17.131s 00:09:44.776 sys 0m1.022s 00:09:44.776 ************************************ 00:09:44.776 END TEST locking_overlapped_coremask 00:09:44.776 ************************************ 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:44.776 20:37:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:44.776 20:37:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.776 20:37:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.776 20:37:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:44.776 ************************************ 00:09:44.776 START TEST locking_overlapped_coremask_via_rpc 00:09:44.776 ************************************ 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60855 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60855 /var/tmp/spdk.sock 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60855 ']' 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.776 20:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.036 [2024-11-26 20:37:39.907894] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:45.036 [2024-11-26 20:37:39.908142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60855 ] 00:09:45.296 [2024-11-26 20:37:40.115183] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
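After the second claim fails, check_remaining_locks (traced above) asserts that the lock files present in /var/tmp are exactly one per core of the surviving 0x7 target: spdk_cpu_lock_000 through spdk_cpu_lock_002. The traced logic, restated as a standalone function with the same globbing:

# Assert the only SPDK core locks on disk are for cores 0..2.
check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${expected[*]}" ]] || {
        echo "unexpected lock files: ${locks[*]}" >&2
        return 1
    }
}

The string comparison works because the glob expands in sorted order and the brace expansion is written in the same ascending order, so any stale lock from a dead process makes the two lists diverge.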
00:09:45.296 [2024-11-26 20:37:40.115639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.296 [2024-11-26 20:37:40.258710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.296 [2024-11-26 20:37:40.258905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.296 [2024-11-26 20:37:40.258931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.674 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.674 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:46.674 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60878 00:09:46.674 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:46.674 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60878 /var/tmp/spdk2.sock 00:09:46.674 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60878 ']' 00:09:46.674 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:46.674 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.674 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:46.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:46.674 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.674 20:37:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.674 [2024-11-26 20:37:41.485451] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:46.674 [2024-11-26 20:37:41.485983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60878 ] 00:09:46.931 [2024-11-26 20:37:41.717821] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:46.931 [2024-11-26 20:37:41.717950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.188 [2024-11-26 20:37:42.070930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.189 [2024-11-26 20:37:42.074766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.189 [2024-11-26 20:37:42.074794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.715 [2024-11-26 20:37:44.571974] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60855 has claimed it. 00:09:49.715 request: 00:09:49.715 { 00:09:49.715 "method": "framework_enable_cpumask_locks", 00:09:49.715 "req_id": 1 00:09:49.715 } 00:09:49.715 Got JSON-RPC error response 00:09:49.715 response: 00:09:49.715 { 00:09:49.715 "code": -32603, 00:09:49.715 "message": "Failed to claim CPU core: 2" 00:09:49.715 } 00:09:49.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
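A minimal sketch of what this test drives, with the masks and socket paths taken from the log above: two spdk_tgt instances whose coremasks overlap on core 2, where only the first may end up holding the lock.

    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                          # cores 0-2
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # cores 2-4
    scripts/rpc.py framework_enable_cpumask_locks            # first target claims locks 000-002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo 'expected failure: -32603, core 2 already claimed (the error above)'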
00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60855 /var/tmp/spdk.sock 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60855 ']' 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.715 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.972 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.972 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:49.972 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60878 /var/tmp/spdk2.sock 00:09:49.972 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60878 ']' 00:09:49.972 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:49.972 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.972 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:49.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
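The check_remaining_locks step traced just below reduces to a glob-versus-brace-expansion equality test on the lock files; a sketch:

    locks=(/var/tmp/spdk_cpu_lock_*)              # whatever lock files actually exist
    expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, still held by pid 60855
    [[ ${locks[*]} == "${expected[*]}" ]]         # the test passes only on an exact match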
00:09:49.972 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.972 20:37:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.229 20:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.229 20:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:50.229 20:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:50.229 20:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:50.229 20:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:50.229 20:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:50.229 ************************************ 00:09:50.229 END TEST locking_overlapped_coremask_via_rpc 00:09:50.229 ************************************ 00:09:50.229 00:09:50.229 real 0m5.461s 00:09:50.229 user 0m1.893s 00:09:50.229 sys 0m0.325s 00:09:50.229 20:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.229 20:37:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.487 20:37:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:50.487 20:37:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60855 ]] 00:09:50.487 20:37:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60855 00:09:50.487 20:37:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60855 ']' 00:09:50.487 20:37:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60855 00:09:50.487 20:37:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:50.487 20:37:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.487 20:37:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60855 00:09:50.487 killing process with pid 60855 00:09:50.487 20:37:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.487 20:37:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.487 20:37:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60855' 00:09:50.487 20:37:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60855 00:09:50.487 20:37:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60855 00:09:53.773 20:37:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60878 ]] 00:09:53.773 20:37:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60878 00:09:53.773 20:37:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60878 ']' 00:09:53.773 20:37:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60878 00:09:53.773 20:37:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:53.773 20:37:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.773 
20:37:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60878 00:09:53.773 killing process with pid 60878 00:09:53.773 20:37:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:53.773 20:37:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:53.773 20:37:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60878' 00:09:53.773 20:37:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60878 00:09:53.773 20:37:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60878 00:09:57.082 20:37:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:57.082 Process with pid 60855 is not found 00:09:57.082 Process with pid 60878 is not found 00:09:57.082 20:37:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:57.082 20:37:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60855 ]] 00:09:57.082 20:37:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60855 00:09:57.082 20:37:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60855 ']' 00:09:57.082 20:37:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60855 00:09:57.082 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60855) - No such process 00:09:57.082 20:37:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60855 is not found' 00:09:57.082 20:37:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60878 ]] 00:09:57.082 20:37:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60878 00:09:57.082 20:37:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60878 ']' 00:09:57.082 20:37:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60878 00:09:57.082 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60878) - No such process 00:09:57.082 20:37:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60878 is not found' 00:09:57.082 20:37:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:57.082 00:09:57.082 real 1m4.581s 00:09:57.082 user 1m50.641s 00:09:57.082 sys 0m9.783s 00:09:57.082 20:37:51 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.082 20:37:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:57.082 ************************************ 00:09:57.082 END TEST cpu_locks 00:09:57.082 ************************************ 00:09:57.082 00:09:57.082 real 1m40.540s 00:09:57.082 user 3m3.737s 00:09:57.082 sys 0m15.252s 00:09:57.082 20:37:51 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.082 20:37:51 event -- common/autotest_common.sh@10 -- # set +x 00:09:57.082 ************************************ 00:09:57.082 END TEST event 00:09:57.082 ************************************ 00:09:57.082 20:37:51 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:57.082 20:37:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.082 20:37:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.082 20:37:51 -- common/autotest_common.sh@10 -- # set +x 00:09:57.082 ************************************ 00:09:57.082 START TEST thread 00:09:57.082 ************************************ 00:09:57.082 20:37:51 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:57.082 * Looking for test storage... 
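The kill and cleanup exchanges above follow a probe-then-kill pattern; a simplified sketch of the killprocess helper (the full version in autotest_common.sh also inspects the process name, as the ps calls above show):

    killprocess() {
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"; return 0
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"    # wait reaps the reactor and surfaces its exit code
    }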
00:09:57.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:57.082 20:37:51 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.082 20:37:51 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.082 20:37:51 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.082 20:37:51 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.082 20:37:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.082 20:37:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.082 20:37:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.082 20:37:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.082 20:37:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.082 20:37:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.082 20:37:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.082 20:37:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.082 20:37:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.082 20:37:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.082 20:37:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.082 20:37:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:57.082 20:37:51 thread -- scripts/common.sh@345 -- # : 1 00:09:57.082 20:37:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.082 20:37:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.082 20:37:51 thread -- scripts/common.sh@365 -- # decimal 1 00:09:57.082 20:37:51 thread -- scripts/common.sh@353 -- # local d=1 00:09:57.082 20:37:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.082 20:37:51 thread -- scripts/common.sh@355 -- # echo 1 00:09:57.082 20:37:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.082 20:37:51 thread -- scripts/common.sh@366 -- # decimal 2 00:09:57.082 20:37:51 thread -- scripts/common.sh@353 -- # local d=2 00:09:57.083 20:37:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.083 20:37:51 thread -- scripts/common.sh@355 -- # echo 2 00:09:57.083 20:37:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.083 20:37:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.083 20:37:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.083 20:37:51 thread -- scripts/common.sh@368 -- # return 0 00:09:57.083 20:37:51 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.083 20:37:51 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.083 --rc genhtml_branch_coverage=1 00:09:57.083 --rc genhtml_function_coverage=1 00:09:57.083 --rc genhtml_legend=1 00:09:57.083 --rc geninfo_all_blocks=1 00:09:57.083 --rc geninfo_unexecuted_blocks=1 00:09:57.083 00:09:57.083 ' 00:09:57.083 20:37:51 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.083 --rc genhtml_branch_coverage=1 00:09:57.083 --rc genhtml_function_coverage=1 00:09:57.083 --rc genhtml_legend=1 00:09:57.083 --rc geninfo_all_blocks=1 00:09:57.083 --rc geninfo_unexecuted_blocks=1 00:09:57.083 00:09:57.083 ' 00:09:57.083 20:37:51 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:57.083 --rc genhtml_branch_coverage=1 00:09:57.083 --rc genhtml_function_coverage=1 00:09:57.083 --rc genhtml_legend=1 00:09:57.083 --rc geninfo_all_blocks=1 00:09:57.083 --rc geninfo_unexecuted_blocks=1 00:09:57.083 00:09:57.083 ' 00:09:57.083 20:37:51 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.083 --rc genhtml_branch_coverage=1 00:09:57.083 --rc genhtml_function_coverage=1 00:09:57.083 --rc genhtml_legend=1 00:09:57.083 --rc geninfo_all_blocks=1 00:09:57.083 --rc geninfo_unexecuted_blocks=1 00:09:57.083 00:09:57.083 ' 00:09:57.083 20:37:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:57.083 20:37:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:57.083 20:37:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.083 20:37:51 thread -- common/autotest_common.sh@10 -- # set +x 00:09:57.083 ************************************ 00:09:57.083 START TEST thread_poller_perf 00:09:57.083 ************************************ 00:09:57.083 20:37:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:57.083 [2024-11-26 20:37:51.782338] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:57.083 [2024-11-26 20:37:51.782818] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61091 ] 00:09:57.083 [2024-11-26 20:37:51.989228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.342 [2024-11-26 20:37:52.161941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.342 Running 1000 pollers for 1 seconds with 1 microseconds period. 
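The lt/cmp_versions walk traced above (it is re-run before every test group to pick lcov flags) is a field-wise numeric compare; a condensed sketch:

    lt() {  # succeeds when $1 < $2, e.g. lt 1.15 2, so lcov 1.x gets the legacy --rc flags
        local IFS=.-: v a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }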
00:09:58.730 [2024-11-26T20:37:53.724Z] ====================================== 00:09:58.730 [2024-11-26T20:37:53.724Z] busy:2114530688 (cyc) 00:09:58.730 [2024-11-26T20:37:53.724Z] total_run_count: 282000 00:09:58.730 [2024-11-26T20:37:53.724Z] tsc_hz: 2100000000 (cyc) 00:09:58.730 [2024-11-26T20:37:53.724Z] ====================================== 00:09:58.730 [2024-11-26T20:37:53.724Z] poller_cost: 7498 (cyc), 3570 (nsec) 00:09:58.730 00:09:58.730 real 0m1.733s 00:09:58.730 user 0m1.491s 00:09:58.730 sys 0m0.128s 00:09:58.730 20:37:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.730 20:37:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:58.730 ************************************ 00:09:58.730 END TEST thread_poller_perf 00:09:58.730 ************************************ 00:09:58.730 20:37:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:58.730 20:37:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:58.730 20:37:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.730 20:37:53 thread -- common/autotest_common.sh@10 -- # set +x 00:09:58.730 ************************************ 00:09:58.730 START TEST thread_poller_perf 00:09:58.730 ************************************ 00:09:58.730 20:37:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:58.730 [2024-11-26 20:37:53.587785] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:09:58.730 [2024-11-26 20:37:53.588008] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61133 ] 00:09:58.988 [2024-11-26 20:37:53.825917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.246 [2024-11-26 20:37:54.027385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.246 Running 1000 pollers for 1 seconds with 0 microseconds period. 
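poller_cost in the result tables is plain arithmetic on the printed counters; reproducing the figures above:

    awk 'BEGIN {
        busy = 2114530688; runs = 282000; hz = 2100000000   # values from the 1 us run above
        cyc = busy / runs                                   # 7498 cycles per poll
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / (hz / 1e9)
    }'

The 0 us run below works out the same way: 2107495794 / 3724000 gives 565 cyc, or 269 nsec at 2.1 GHz.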
00:10:00.621 [2024-11-26T20:37:55.616Z] ====================================== 00:10:00.622 [2024-11-26T20:37:55.616Z] busy:2107495794 (cyc) 00:10:00.622 [2024-11-26T20:37:55.616Z] total_run_count: 3724000 00:10:00.622 [2024-11-26T20:37:55.616Z] tsc_hz: 2100000000 (cyc) 00:10:00.622 [2024-11-26T20:37:55.616Z] ====================================== 00:10:00.622 [2024-11-26T20:37:55.616Z] poller_cost: 565 (cyc), 269 (nsec) 00:10:00.622 00:10:00.622 real 0m1.782s 00:10:00.622 user 0m1.517s 00:10:00.622 sys 0m0.152s 00:10:00.622 ************************************ 00:10:00.622 END TEST thread_poller_perf 00:10:00.622 ************************************ 00:10:00.622 20:37:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.622 20:37:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:00.622 20:37:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:00.622 ************************************ 00:10:00.622 END TEST thread 00:10:00.622 ************************************ 00:10:00.622 00:10:00.622 real 0m3.867s 00:10:00.622 user 0m3.190s 00:10:00.622 sys 0m0.454s 00:10:00.622 20:37:55 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.622 20:37:55 thread -- common/autotest_common.sh@10 -- # set +x 00:10:00.622 20:37:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:00.622 20:37:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:00.622 20:37:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.622 20:37:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.622 20:37:55 -- common/autotest_common.sh@10 -- # set +x 00:10:00.622 ************************************ 00:10:00.622 START TEST app_cmdline 00:10:00.622 ************************************ 00:10:00.622 20:37:55 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:00.622 * Looking for test storage... 
00:10:00.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:00.622 20:37:55 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.622 20:37:55 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.622 20:37:55 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.622 20:37:55 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.622 20:37:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:00.622 20:37:55 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.622 20:37:55 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.622 --rc genhtml_branch_coverage=1 00:10:00.622 --rc genhtml_function_coverage=1 00:10:00.622 --rc genhtml_legend=1 00:10:00.622 --rc geninfo_all_blocks=1 00:10:00.622 --rc geninfo_unexecuted_blocks=1 00:10:00.622 00:10:00.622 ' 00:10:00.622 20:37:55 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.622 --rc genhtml_branch_coverage=1 00:10:00.622 --rc genhtml_function_coverage=1 00:10:00.622 --rc genhtml_legend=1 00:10:00.622 --rc geninfo_all_blocks=1 00:10:00.622 --rc geninfo_unexecuted_blocks=1 00:10:00.622 
00:10:00.622 ' 00:10:00.622 20:37:55 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.622 --rc genhtml_branch_coverage=1 00:10:00.622 --rc genhtml_function_coverage=1 00:10:00.622 --rc genhtml_legend=1 00:10:00.622 --rc geninfo_all_blocks=1 00:10:00.622 --rc geninfo_unexecuted_blocks=1 00:10:00.622 00:10:00.622 ' 00:10:00.622 20:37:55 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.622 --rc genhtml_branch_coverage=1 00:10:00.622 --rc genhtml_function_coverage=1 00:10:00.622 --rc genhtml_legend=1 00:10:00.622 --rc geninfo_all_blocks=1 00:10:00.622 --rc geninfo_unexecuted_blocks=1 00:10:00.622 00:10:00.622 ' 00:10:00.622 20:37:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:00.880 20:37:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61222 00:10:00.880 20:37:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:00.880 20:37:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61222 00:10:00.880 20:37:55 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61222 ']' 00:10:00.880 20:37:55 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.880 20:37:55 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.880 20:37:55 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.880 20:37:55 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.880 20:37:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:00.880 [2024-11-26 20:37:55.736934] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
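The target above runs with --rpcs-allowed spdk_get_version,rpc_get_methods, so the exchanges below split cleanly: the two allowed methods answer normally, and anything else is rejected before dispatch. A sketch of the three calls:

    scripts/rpc.py spdk_get_version         # allowed: returns the version object shown below
    scripts/rpc.py rpc_get_methods          # allowed: exactly the two whitelisted methods
    scripts/rpc.py env_dpdk_get_mem_stats   # not on the list: JSON-RPC -32601, Method not found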
00:10:00.880 [2024-11-26 20:37:55.737273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61222 ] 00:10:01.139 [2024-11-26 20:37:55.928403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.139 [2024-11-26 20:37:56.104886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.516 20:37:57 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.516 20:37:57 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:02.516 20:37:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:02.775 { 00:10:02.775 "version": "SPDK v25.01-pre git sha1 2f2acf4eb", 00:10:02.775 "fields": { 00:10:02.775 "major": 25, 00:10:02.775 "minor": 1, 00:10:02.775 "patch": 0, 00:10:02.775 "suffix": "-pre", 00:10:02.775 "commit": "2f2acf4eb" 00:10:02.775 } 00:10:02.775 } 00:10:02.775 20:37:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:02.775 20:37:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:02.775 20:37:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:02.775 20:37:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:02.775 20:37:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:02.775 20:37:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:02.775 20:37:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.775 20:37:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:02.775 20:37:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:02.775 20:37:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:02.775 20:37:57 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:03.034 request: 00:10:03.034 { 00:10:03.034 "method": "env_dpdk_get_mem_stats", 00:10:03.034 "req_id": 1 00:10:03.034 } 00:10:03.034 Got JSON-RPC error response 00:10:03.034 response: 00:10:03.034 { 00:10:03.034 "code": -32601, 00:10:03.034 "message": "Method not found" 00:10:03.034 } 00:10:03.034 20:37:57 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:03.034 20:37:57 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:03.034 20:37:57 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:03.034 20:37:57 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:03.034 20:37:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61222 00:10:03.034 20:37:57 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61222 ']' 00:10:03.034 20:37:57 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61222 00:10:03.034 20:37:57 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:03.034 20:37:57 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.034 20:37:57 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61222 00:10:03.034 killing process with pid 61222 00:10:03.034 20:37:58 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.034 20:37:58 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.034 20:37:58 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61222' 00:10:03.034 20:37:58 app_cmdline -- common/autotest_common.sh@973 -- # kill 61222 00:10:03.034 20:37:58 app_cmdline -- common/autotest_common.sh@978 -- # wait 61222 00:10:06.329 00:10:06.329 real 0m5.519s 00:10:06.329 user 0m6.036s 00:10:06.329 sys 0m0.765s 00:10:06.329 20:38:00 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.329 20:38:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:06.329 ************************************ 00:10:06.329 END TEST app_cmdline 00:10:06.329 ************************************ 00:10:06.329 20:38:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:06.329 20:38:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.329 20:38:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.329 20:38:00 -- common/autotest_common.sh@10 -- # set +x 00:10:06.329 ************************************ 00:10:06.329 START TEST version 00:10:06.329 ************************************ 00:10:06.329 20:38:00 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:06.329 * Looking for test storage... 
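The get_header_version traces below each boil down to a one-line pipeline over include/spdk/version.h; for the major field:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
        | cut -f2 | tr -d '"'    # yields 25; MINOR, PATCH and SUFFIX are pulled the same way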
00:10:06.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:06.329 20:38:01 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.329 20:38:01 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.329 20:38:01 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.329 20:38:01 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.329 20:38:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.329 20:38:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.329 20:38:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.329 20:38:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.329 20:38:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.329 20:38:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.329 20:38:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.329 20:38:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.329 20:38:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.329 20:38:01 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.329 20:38:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.329 20:38:01 version -- scripts/common.sh@344 -- # case "$op" in 00:10:06.329 20:38:01 version -- scripts/common.sh@345 -- # : 1 00:10:06.329 20:38:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.329 20:38:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.329 20:38:01 version -- scripts/common.sh@365 -- # decimal 1 00:10:06.329 20:38:01 version -- scripts/common.sh@353 -- # local d=1 00:10:06.329 20:38:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.329 20:38:01 version -- scripts/common.sh@355 -- # echo 1 00:10:06.329 20:38:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.329 20:38:01 version -- scripts/common.sh@366 -- # decimal 2 00:10:06.329 20:38:01 version -- scripts/common.sh@353 -- # local d=2 00:10:06.329 20:38:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.329 20:38:01 version -- scripts/common.sh@355 -- # echo 2 00:10:06.329 20:38:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.329 20:38:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.329 20:38:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.329 20:38:01 version -- scripts/common.sh@368 -- # return 0 00:10:06.330 20:38:01 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.330 20:38:01 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.330 --rc genhtml_branch_coverage=1 00:10:06.330 --rc genhtml_function_coverage=1 00:10:06.330 --rc genhtml_legend=1 00:10:06.330 --rc geninfo_all_blocks=1 00:10:06.330 --rc geninfo_unexecuted_blocks=1 00:10:06.330 00:10:06.330 ' 00:10:06.330 20:38:01 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.330 --rc genhtml_branch_coverage=1 00:10:06.330 --rc genhtml_function_coverage=1 00:10:06.330 --rc genhtml_legend=1 00:10:06.330 --rc geninfo_all_blocks=1 00:10:06.330 --rc geninfo_unexecuted_blocks=1 00:10:06.330 00:10:06.330 ' 00:10:06.330 20:38:01 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.330 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:06.330 --rc genhtml_branch_coverage=1 00:10:06.330 --rc genhtml_function_coverage=1 00:10:06.330 --rc genhtml_legend=1 00:10:06.330 --rc geninfo_all_blocks=1 00:10:06.330 --rc geninfo_unexecuted_blocks=1 00:10:06.330 00:10:06.330 ' 00:10:06.330 20:38:01 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.330 --rc genhtml_branch_coverage=1 00:10:06.330 --rc genhtml_function_coverage=1 00:10:06.330 --rc genhtml_legend=1 00:10:06.330 --rc geninfo_all_blocks=1 00:10:06.330 --rc geninfo_unexecuted_blocks=1 00:10:06.330 00:10:06.330 ' 00:10:06.330 20:38:01 version -- app/version.sh@17 -- # get_header_version major 00:10:06.330 20:38:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:06.330 20:38:01 version -- app/version.sh@14 -- # tr -d '"' 00:10:06.330 20:38:01 version -- app/version.sh@14 -- # cut -f2 00:10:06.330 20:38:01 version -- app/version.sh@17 -- # major=25 00:10:06.330 20:38:01 version -- app/version.sh@18 -- # get_header_version minor 00:10:06.330 20:38:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:06.330 20:38:01 version -- app/version.sh@14 -- # cut -f2 00:10:06.330 20:38:01 version -- app/version.sh@14 -- # tr -d '"' 00:10:06.330 20:38:01 version -- app/version.sh@18 -- # minor=1 00:10:06.330 20:38:01 version -- app/version.sh@19 -- # get_header_version patch 00:10:06.330 20:38:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:06.330 20:38:01 version -- app/version.sh@14 -- # tr -d '"' 00:10:06.330 20:38:01 version -- app/version.sh@14 -- # cut -f2 00:10:06.330 20:38:01 version -- app/version.sh@19 -- # patch=0 00:10:06.330 20:38:01 version -- app/version.sh@20 -- # get_header_version suffix 00:10:06.330 20:38:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:06.330 20:38:01 version -- app/version.sh@14 -- # cut -f2 00:10:06.330 20:38:01 version -- app/version.sh@14 -- # tr -d '"' 00:10:06.330 20:38:01 version -- app/version.sh@20 -- # suffix=-pre 00:10:06.330 20:38:01 version -- app/version.sh@22 -- # version=25.1 00:10:06.330 20:38:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:06.330 20:38:01 version -- app/version.sh@28 -- # version=25.1rc0 00:10:06.330 20:38:01 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:06.330 20:38:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:06.330 20:38:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:06.330 20:38:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:06.330 00:10:06.330 real 0m0.298s 00:10:06.330 user 0m0.190s 00:10:06.330 sys 0m0.149s 00:10:06.330 ************************************ 00:10:06.330 END TEST version 00:10:06.330 ************************************ 00:10:06.330 20:38:01 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.330 20:38:01 version -- common/autotest_common.sh@10 -- # set +x 00:10:06.587 20:38:01 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:06.587 20:38:01 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:06.587 20:38:01 -- spdk/autotest.sh@194 -- # uname -s 00:10:06.587 20:38:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:06.588 20:38:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:06.588 20:38:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:06.588 20:38:01 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:10:06.588 20:38:01 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:06.588 20:38:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:06.588 20:38:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.588 20:38:01 -- common/autotest_common.sh@10 -- # set +x 00:10:06.588 ************************************ 00:10:06.588 START TEST blockdev_nvme 00:10:06.588 ************************************ 00:10:06.588 20:38:01 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:06.588 * Looking for test storage... 00:10:06.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:06.588 20:38:01 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.588 20:38:01 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.588 20:38:01 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.588 20:38:01 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.588 20:38:01 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:10:06.588 20:38:01 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.588 20:38:01 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.588 --rc genhtml_branch_coverage=1 00:10:06.588 --rc genhtml_function_coverage=1 00:10:06.588 --rc genhtml_legend=1 00:10:06.588 --rc geninfo_all_blocks=1 00:10:06.588 --rc geninfo_unexecuted_blocks=1 00:10:06.588 00:10:06.588 ' 00:10:06.588 20:38:01 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.588 --rc genhtml_branch_coverage=1 00:10:06.588 --rc genhtml_function_coverage=1 00:10:06.588 --rc genhtml_legend=1 00:10:06.588 --rc geninfo_all_blocks=1 00:10:06.588 --rc geninfo_unexecuted_blocks=1 00:10:06.588 00:10:06.588 ' 00:10:06.588 20:38:01 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.588 --rc genhtml_branch_coverage=1 00:10:06.588 --rc genhtml_function_coverage=1 00:10:06.588 --rc genhtml_legend=1 00:10:06.588 --rc geninfo_all_blocks=1 00:10:06.588 --rc geninfo_unexecuted_blocks=1 00:10:06.588 00:10:06.588 ' 00:10:06.588 20:38:01 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.588 --rc genhtml_branch_coverage=1 00:10:06.588 --rc genhtml_function_coverage=1 00:10:06.588 --rc genhtml_legend=1 00:10:06.588 --rc geninfo_all_blocks=1 00:10:06.588 --rc geninfo_unexecuted_blocks=1 00:10:06.588 00:10:06.588 ' 00:10:06.588 20:38:01 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:06.588 20:38:01 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61428 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61428 00:10:06.846 20:38:01 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61428 ']' 00:10:06.846 20:38:01 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:06.846 20:38:01 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.846 20:38:01 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.846 20:38:01 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.846 20:38:01 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.846 20:38:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:06.846 [2024-11-26 20:38:01.761107] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
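setup_nvme_conf below loads a gen_nvme.sh-generated subsystem config that attaches the four QEMU controllers; each entry in that JSON is equivalent to a one-shot rpc.py call, e.g. for the first device:

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # Nvme1-Nvme3 follow at 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0, as in the JSON below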
00:10:06.846 [2024-11-26 20:38:01.761585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61428 ] 00:10:07.104 [2024-11-26 20:38:01.970882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.362 [2024-11-26 20:38:02.171463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.733 20:38:03 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.733 20:38:03 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:10:08.733 20:38:03 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:08.733 20:38:03 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:10:08.733 20:38:03 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:10:08.733 20:38:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:08.733 20:38:03 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:08.733 20:38:03 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:08.733 20:38:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.733 20:38:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.990 20:38:03 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.990 20:38:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:10:08.990 20:38:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.990 20:38:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.990 20:38:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.990 20:38:03 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:08.990 20:38:03 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:08.990 20:38:03 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.990 20:38:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:09.249 20:38:04 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.249 20:38:04 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:09.249 20:38:04 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:09.250 20:38:04 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2bf7d8e3-3293-4505-9797-51670ce72373"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2bf7d8e3-3293-4505-9797-51670ce72373",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "0a66d661-55de-411a-8ea0-4832b26df595"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0a66d661-55de-411a-8ea0-4832b26df595",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c4d154e2-68f3-44b5-a669-22f00952d3d4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c4d154e2-68f3-44b5-a669-22f00952d3d4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "222340ed-af64-4305-9c8f-d3e0a7322abe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "222340ed-af64-4305-9c8f-d3e0a7322abe",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6c6c3c03-4004-4a13-ac8d-61bf5e1173e7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "6c6c3c03-4004-4a13-ac8d-61bf5e1173e7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ddc62d1a-b2d5-4007-bd9d-8c4a3ceec98a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ddc62d1a-b2d5-4007-bd9d-8c4a3ceec98a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:09.250 20:38:04 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:09.250 20:38:04 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:09.250 20:38:04 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:09.250 20:38:04 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61428 00:10:09.250 20:38:04 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61428 ']' 00:10:09.250 20:38:04 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61428 00:10:09.250 20:38:04 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:10:09.250 20:38:04 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.250 20:38:04 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61428 00:10:09.250 killing process with pid 61428 00:10:09.250 20:38:04 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.250 20:38:04 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.250 20:38:04 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61428' 00:10:09.250 20:38:04 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61428 00:10:09.250 20:38:04 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61428 00:10:12.565 20:38:07 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:12.565 20:38:07 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:12.565 20:38:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:12.565 20:38:07 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.565 20:38:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.565 ************************************ 00:10:12.565 START TEST bdev_hello_world 00:10:12.565 ************************************ 00:10:12.565 20:38:07 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:12.565 [2024-11-26 20:38:07.407194] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:12.565 [2024-11-26 20:38:07.407435] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61534 ] 00:10:12.823 [2024-11-26 20:38:07.617475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.081 [2024-11-26 20:38:07.815942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.015 [2024-11-26 20:38:08.649488] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:14.015 [2024-11-26 20:38:08.649569] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:14.015 [2024-11-26 20:38:08.649607] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:14.015 [2024-11-26 20:38:08.653891] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:14.015 [2024-11-26 20:38:08.654479] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:14.015 [2024-11-26 20:38:08.654520] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:14.015 [2024-11-26 20:38:08.654685] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
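The hello_bdev run above is driven entirely by a JSON bdev configuration (generated earlier in this job by scripts/gen_nvme.sh) plus the name of the bdev to open. For reference, a minimal standalone sketch of the same step against a single controller; the controller address is the first one attached in this run, but the subsystems wrapper and the /tmp path below are assumptions, since the exact contents of this job's bdev.json never appear in the log:

# Hand-written one-controller bdev config (layout assumed from SPDK's app JSON format).
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
# hello_bdev opens Nvme0n1, writes "Hello World!", reads it back, and stops the app.
sudo ./build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1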
00:10:14.015 00:10:14.015 [2024-11-26 20:38:08.654715] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:15.387 00:10:15.387 real 0m2.828s 00:10:15.387 user 0m2.299s 00:10:15.387 sys 0m0.412s 00:10:15.387 20:38:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.387 ************************************ 00:10:15.387 END TEST bdev_hello_world 00:10:15.387 20:38:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:15.387 ************************************ 00:10:15.387 20:38:10 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:15.387 20:38:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.387 20:38:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.387 20:38:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:15.387 ************************************ 00:10:15.387 START TEST bdev_bounds 00:10:15.387 ************************************ 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61587 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:15.387 Process bdevio pid: 61587 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61587' 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61587 00:10:15.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61587 ']' 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.387 20:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:15.387 [2024-11-26 20:38:10.299704] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
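A note on the shape of the bdevio stage that follows: the app is launched with -w, so it attaches its bdevs, registers one CUnit suite per bdev, and then waits on the RPC socket; the separate tests.py perform_tests call seen in the trace below is what actually kicks off the suites. A sketch of that two-step pattern, with flags and in-tree paths mirrored from this run (run from the SPDK repo root as root):

# Step 1: start bdevio in wait mode; it idles until told to run.
sudo ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
# Step 2: once the app is listening on the default RPC socket, trigger the suites.
sudo ./test/bdev/bdevio/tests.py perform_tests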
00:10:15.387 [2024-11-26 20:38:10.300188] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61587 ] 00:10:15.644 [2024-11-26 20:38:10.503895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.902 [2024-11-26 20:38:10.669728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.902 [2024-11-26 20:38:10.669838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.902 [2024-11-26 20:38:10.669885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.548 20:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.548 20:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:16.548 20:38:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:16.811 I/O targets: 00:10:16.811 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:16.811 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:16.811 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:16.811 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:16.811 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:16.812 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:16.812 00:10:16.812 00:10:16.812 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.812 http://cunit.sourceforge.net/ 00:10:16.812 00:10:16.812 00:10:16.812 Suite: bdevio tests on: Nvme3n1 00:10:16.812 Test: blockdev write read block ...passed 00:10:16.812 Test: blockdev write zeroes read block ...passed 00:10:16.812 Test: blockdev write zeroes read no split ...passed 00:10:16.812 Test: blockdev write zeroes read split ...passed 00:10:16.812 Test: blockdev write zeroes read split partial ...passed 00:10:16.812 Test: blockdev reset ...[2024-11-26 20:38:11.742195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:16.812 passed 00:10:16.812 Test: blockdev write read 8 blocks ...[2024-11-26 20:38:11.746796] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:10:16.812 passed 00:10:16.812 Test: blockdev write read size > 128k ...passed 00:10:16.812 Test: blockdev write read invalid size ...passed 00:10:16.812 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:16.812 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:16.812 Test: blockdev write read max offset ...passed 00:10:16.812 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:16.812 Test: blockdev writev readv 8 blocks ...passed 00:10:16.812 Test: blockdev writev readv 30 x 1block ...passed 00:10:16.812 Test: blockdev writev readv block ...passed 00:10:16.812 Test: blockdev writev readv size > 128k ...passed 00:10:16.812 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:16.812 Test: blockdev comparev and writev ...[2024-11-26 20:38:11.755643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b280a000 len:0x1000 00:10:16.812 [2024-11-26 20:38:11.755944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:16.812 passed 00:10:16.812 Test: blockdev nvme passthru rw ...passed 00:10:16.812 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:38:11.756970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:16.812 [2024-11-26 20:38:11.757220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:16.812 passed 00:10:16.812 Test: blockdev nvme admin passthru ...passed 00:10:16.812 Test: blockdev copy ...passed 00:10:16.812 Suite: bdevio tests on: Nvme2n3 00:10:16.812 Test: blockdev write read block ...passed 00:10:16.812 Test: blockdev write zeroes read block ...passed 00:10:16.812 Test: blockdev write zeroes read no split ...passed 00:10:17.070 Test: blockdev write zeroes read split ...passed 00:10:17.070 Test: blockdev write zeroes read split partial ...passed 00:10:17.070 Test: blockdev reset ...[2024-11-26 20:38:11.873794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:17.070 [2024-11-26 20:38:11.879085] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:10:17.070 passed 00:10:17.070 Test: blockdev write read 8 blocks ...passed 00:10:17.070 Test: blockdev write read size > 128k ...passed 00:10:17.070 Test: blockdev write read invalid size ...passed 00:10:17.070 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.070 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.070 Test: blockdev write read max offset ...passed 00:10:17.070 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.070 Test: blockdev writev readv 8 blocks ...passed 00:10:17.070 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.070 Test: blockdev writev readv block ...passed 00:10:17.070 Test: blockdev writev readv size > 128k ...passed 00:10:17.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.070 Test: blockdev comparev and writev ...[2024-11-26 20:38:11.889780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295a06000 len:0x1000 00:10:17.070 [2024-11-26 20:38:11.889986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:17.070 passed 00:10:17.070 Test: blockdev nvme passthru rw ...passed 00:10:17.070 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:38:11.890882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:17.070 [2024-11-26 20:38:11.891035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:17.070 passed 00:10:17.070 Test: blockdev nvme admin passthru ...passed 00:10:17.070 Test: blockdev copy ...passed 00:10:17.070 Suite: bdevio tests on: Nvme2n2 00:10:17.070 Test: blockdev write read block ...passed 00:10:17.070 Test: blockdev write zeroes read block ...passed 00:10:17.070 Test: blockdev write zeroes read no split ...passed 00:10:17.070 Test: blockdev write zeroes read split ...passed 00:10:17.070 Test: blockdev write zeroes read split partial ...passed 00:10:17.070 Test: blockdev reset ...[2024-11-26 20:38:11.994590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:17.070 [2024-11-26 20:38:12.000467] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:10:17.070 passed 00:10:17.070 Test: blockdev write read 8 blocks ...
00:10:17.070 passed 00:10:17.070 Test: blockdev write read size > 128k ...passed 00:10:17.070 Test: blockdev write read invalid size ...passed 00:10:17.070 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.070 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.070 Test: blockdev write read max offset ...passed 00:10:17.070 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.070 Test: blockdev writev readv 8 blocks ...passed 00:10:17.070 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.070 Test: blockdev writev readv block ...passed 00:10:17.070 Test: blockdev writev readv size > 128k ...passed 00:10:17.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.070 Test: blockdev comparev and writev ...[2024-11-26 20:38:12.010871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c283c000 len:0x1000 00:10:17.070 [2024-11-26 20:38:12.010940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:17.070 passed 00:10:17.070 Test: blockdev nvme passthru rw ...passed 00:10:17.070 Test: blockdev nvme passthru vendor specific ...passed 00:10:17.070 Test: blockdev nvme admin passthru ...[2024-11-26 20:38:12.011827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:17.070 [2024-11-26 20:38:12.011875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:17.070 passed 00:10:17.070 Test: blockdev copy ...passed 00:10:17.070 Suite: bdevio tests on: Nvme2n1 00:10:17.070 Test: blockdev write read block ...passed 00:10:17.070 Test: blockdev write zeroes read block ...passed 00:10:17.070 Test: blockdev write zeroes read no split ...passed 00:10:17.328 Test: blockdev write zeroes read split ...passed 00:10:17.328 Test: blockdev write zeroes read split partial ...passed 00:10:17.328 Test: blockdev reset ...[2024-11-26 20:38:12.120106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:17.328 [2024-11-26 20:38:12.125416] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:17.328 passed 00:10:17.328 Test: blockdev write read 8 blocks ...passed 00:10:17.328 Test: blockdev write read size > 128k ...passed 00:10:17.328 Test: blockdev write read invalid size ...passed 00:10:17.328 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.328 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.328 Test: blockdev write read max offset ...passed 00:10:17.328 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.328 Test: blockdev writev readv 8 blocks ...passed 00:10:17.328 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.328 Test: blockdev writev readv block ...passed 00:10:17.328 Test: blockdev writev readv size > 128k ...passed 00:10:17.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.328 Test: blockdev comparev and writev ...[2024-11-26 20:38:12.137536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2838000 len:0x1000 00:10:17.328 [2024-11-26 20:38:12.137744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:17.328 passed 00:10:17.328 Test: blockdev nvme passthru rw ...passed 00:10:17.328 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:38:12.138487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:17.328 [2024-11-26 20:38:12.138525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:17.328 passed 00:10:17.328 Test: blockdev nvme admin passthru ...passed 00:10:17.328 Test: blockdev copy ...passed 00:10:17.328 Suite: bdevio tests on: Nvme1n1 00:10:17.328 Test: blockdev write read block ...passed 00:10:17.328 Test: blockdev write zeroes read block ...passed 00:10:17.328 Test: blockdev write zeroes read no split ...passed 00:10:17.328 Test: blockdev write zeroes read split ...passed 00:10:17.328 Test: blockdev write zeroes read split partial ...passed 00:10:17.328 Test: blockdev reset ...[2024-11-26 20:38:12.243123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:17.328 [2024-11-26 20:38:12.247961] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:10:17.328 passed 00:10:17.328 Test: blockdev write read 8 blocks ...passed 00:10:17.328 Test: blockdev write read size > 128k ...passed 00:10:17.328 Test: blockdev write read invalid size ...passed 00:10:17.328 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.328 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.328 Test: blockdev write read max offset ...passed 00:10:17.328 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.328 Test: blockdev writev readv 8 blocks ...passed 00:10:17.328 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.328 Test: blockdev writev readv block ...passed 00:10:17.328 Test: blockdev writev readv size > 128k ...passed 00:10:17.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.329 Test: blockdev comparev and writev ...[2024-11-26 20:38:12.258240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2834000 len:0x1000 00:10:17.329 [2024-11-26 20:38:12.258437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:17.329 passed 00:10:17.329 Test: blockdev nvme passthru rw ...passed 00:10:17.329 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:38:12.259424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:17.329 [2024-11-26 20:38:12.259562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:17.329 passed 00:10:17.329 Test: blockdev nvme admin passthru ...passed 00:10:17.329 Test: blockdev copy ...passed 00:10:17.329 Suite: bdevio tests on: Nvme0n1 00:10:17.329 Test: blockdev write read block ...passed 00:10:17.329 Test: blockdev write zeroes read block ...passed 00:10:17.329 Test: blockdev write zeroes read no split ...passed 00:10:17.329 Test: blockdev write zeroes read split ...passed 00:10:17.587 Test: blockdev write zeroes read split partial ...passed 00:10:17.587 Test: blockdev reset ...[2024-11-26 20:38:12.366498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:17.587 [2024-11-26 20:38:12.371141] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:17.587 passed 00:10:17.587 Test: blockdev write read 8 blocks ...passed 00:10:17.587 Test: blockdev write read size > 128k ...passed 00:10:17.587 Test: blockdev write read invalid size ...passed 00:10:17.587 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.587 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.587 Test: blockdev write read max offset ...passed 00:10:17.587 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.587 Test: blockdev writev readv 8 blocks ...passed 00:10:17.587 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.587 Test: blockdev writev readv block ...passed 00:10:17.587 Test: blockdev writev readv size > 128k ...passed 00:10:17.587 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.587 Test: blockdev comparev and writev ...[2024-11-26 20:38:12.380086] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:17.587 separate metadata which is not supported yet. 
00:10:17.587 passed 00:10:17.587 Test: blockdev nvme passthru rw ...passed 00:10:17.587 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:38:12.380963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:17.587 [2024-11-26 20:38:12.381020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:17.587 passed 00:10:17.587 Test: blockdev nvme admin passthru ...passed 00:10:17.587 Test: blockdev copy ...passed 00:10:17.587 00:10:17.587 Run Summary: Type Total Ran Passed Failed Inactive 00:10:17.587 suites 6 6 n/a 0 0 00:10:17.587 tests 138 138 138 0 0 00:10:17.587 asserts 893 893 893 0 n/a 00:10:17.587 00:10:17.587 Elapsed time = 2.016 seconds 00:10:17.587 0 00:10:17.587 20:38:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61587 00:10:17.587 20:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61587 ']' 00:10:17.587 20:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61587 00:10:17.587 20:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:17.587 20:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.587 20:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61587 00:10:17.587 20:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.587 20:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.587 killing process with pid 61587 00:10:17.587 20:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61587' 00:10:17.587 20:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61587 00:10:17.587 20:38:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61587 00:10:19.486 20:38:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:19.486 00:10:19.486 real 0m3.825s 00:10:19.486 user 0m9.891s 00:10:19.486 sys 0m0.610s 00:10:19.486 20:38:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.486 20:38:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:19.486 ************************************ 00:10:19.486 END TEST bdev_bounds 00:10:19.486 ************************************ 00:10:19.486 20:38:14 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:19.486 20:38:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:19.486 20:38:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.486 20:38:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.486 ************************************ 00:10:19.486 START TEST bdev_nbd 00:10:19.486 ************************************ 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:19.486 20:38:14 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61658 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61658 /var/tmp/spdk-nbd.sock 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61658 ']' 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.486 20:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:19.486 [2024-11-26 20:38:14.207309] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
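The nbd stage below boots the lightweight bdev_svc app on a dedicated RPC socket (the harness has just confirmed the kernel nbd driver is present via /sys/module/nbd) and then maps each bdev to a /dev/nbdX node so ordinary block tools can exercise it. A sketch of the flow the following trace performs, with the socket path and RPC names taken from this run; the dd output path is illustrative:

# Start the minimal bdev service on its own RPC socket.
sudo ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json ./test/bdev/bdev.json &
# Export a bdev through the kernel nbd driver, read one block, then tear it down.
sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
sudo dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0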
00:10:19.486 [2024-11-26 20:38:14.207539] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.486 [2024-11-26 20:38:14.422440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.744 [2024-11-26 20:38:14.630553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:20.679 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:21.244 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:21.245 1+0 records in 
00:10:21.245 1+0 records out 00:10:21.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732174 s, 5.6 MB/s 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:21.245 20:38:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.245 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:21.245 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:21.245 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:21.245 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:21.245 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:21.503 1+0 records in 00:10:21.503 1+0 records out 00:10:21.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631072 s, 6.5 MB/s 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:21.503 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:21.761 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:21.761 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:21.761 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:10:21.761 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:21.761 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:21.761 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:21.761 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:21.761 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:22.019 1+0 records in 00:10:22.019 1+0 records out 00:10:22.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766165 s, 5.3 MB/s 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:22.019 20:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:22.278 1+0 records in 00:10:22.278 1+0 records out 00:10:22.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589221 s, 7.0 MB/s 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.278 20:38:17 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:22.278 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:22.535 1+0 records in 00:10:22.535 1+0 records out 00:10:22.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000901439 s, 4.5 MB/s 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:22.535 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.102 1+0 records in 00:10:23.102 1+0 records out 00:10:23.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796488 s, 5.1 MB/s 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:23.102 20:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:23.359 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:23.359 { 00:10:23.359 "nbd_device": "/dev/nbd0", 00:10:23.359 "bdev_name": "Nvme0n1" 00:10:23.359 }, 00:10:23.359 { 00:10:23.359 "nbd_device": "/dev/nbd1", 00:10:23.359 "bdev_name": "Nvme1n1" 00:10:23.359 }, 00:10:23.359 { 00:10:23.359 "nbd_device": "/dev/nbd2", 00:10:23.359 "bdev_name": "Nvme2n1" 00:10:23.359 }, 00:10:23.359 { 00:10:23.359 "nbd_device": "/dev/nbd3", 00:10:23.359 "bdev_name": "Nvme2n2" 00:10:23.359 }, 00:10:23.359 { 00:10:23.359 "nbd_device": "/dev/nbd4", 00:10:23.359 "bdev_name": "Nvme2n3" 00:10:23.359 }, 00:10:23.359 { 00:10:23.359 "nbd_device": "/dev/nbd5", 00:10:23.359 "bdev_name": "Nvme3n1" 00:10:23.359 } 00:10:23.359 ]' 00:10:23.359 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:23.359 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:23.359 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:23.359 { 00:10:23.359 "nbd_device": "/dev/nbd0", 00:10:23.359 "bdev_name": "Nvme0n1" 00:10:23.359 }, 00:10:23.359 { 00:10:23.359 "nbd_device": "/dev/nbd1", 00:10:23.359 "bdev_name": "Nvme1n1" 00:10:23.359 }, 00:10:23.359 { 00:10:23.359 "nbd_device": "/dev/nbd2", 00:10:23.359 "bdev_name": "Nvme2n1" 00:10:23.359 }, 00:10:23.359 { 00:10:23.359 "nbd_device": "/dev/nbd3", 00:10:23.359 "bdev_name": "Nvme2n2" 00:10:23.359 }, 00:10:23.359 { 00:10:23.360 "nbd_device": "/dev/nbd4", 00:10:23.360 "bdev_name": "Nvme2n3" 00:10:23.360 }, 00:10:23.360 { 00:10:23.360 "nbd_device": "/dev/nbd5", 00:10:23.360 "bdev_name": "Nvme3n1" 00:10:23.360 } 00:10:23.360 ]' 00:10:23.360 20:38:18 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:23.360 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:23.360 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:23.360 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:23.360 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:23.360 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:23.360 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:23.626 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:23.626 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:23.626 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:23.626 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:23.626 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:23.626 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:23.627 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:23.627 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:23.627 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:23.627 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:23.902 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:23.902 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:23.902 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:23.902 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:23.902 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:23.902 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:23.902 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:23.902 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:23.902 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:23.902 20:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:24.159 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:24.159 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:24.159 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:24.159 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:24.159 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:24.159 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:24.159 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:24.159 20:38:19 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:24.159 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:24.159 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:24.420 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:24.420 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:24.420 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:24.420 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:24.420 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:24.420 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:24.420 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:24.420 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:24.420 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:24.420 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:24.676 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:24.676 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:24.676 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:24.676 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:24.676 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:24.676 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:24.676 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:24.676 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:24.676 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:24.676 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:25.240 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:25.240 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:25.240 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:25.240 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.240 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.240 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:25.240 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:25.240 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:25.240 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:25.240 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:25.240 20:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:25.498 20:38:20 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:25.498 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:25.757 /dev/nbd0 00:10:25.757 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:25.757 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:25.757 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:25.757 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:25.757 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:25.757 
20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:25.757 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:25.757 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:25.757 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:25.757 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:25.757 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:25.757 1+0 records in 00:10:25.757 1+0 records out 00:10:25.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535984 s, 7.6 MB/s 00:10:25.758 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.758 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:25.758 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.758 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:25.758 20:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:25.758 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:25.758 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:25.758 20:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:26.324 /dev/nbd1 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:26.324 1+0 records in 00:10:26.324 1+0 records out 00:10:26.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607606 s, 6.7 MB/s 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:26.324 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:26.583 /dev/nbd10 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:26.583 1+0 records in 00:10:26.583 1+0 records out 00:10:26.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00096491 s, 4.2 MB/s 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:26.583 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:27.150 /dev/nbd11 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.150 20:38:21 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.150 1+0 records in 00:10:27.150 1+0 records out 00:10:27.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758977 s, 5.4 MB/s 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:27.150 20:38:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:27.423 /dev/nbd12 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.423 1+0 records in 00:10:27.423 1+0 records out 00:10:27.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703864 s, 5.8 MB/s 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:27.423 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:27.713 /dev/nbd13 
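Every nbd_start_disk/waitfornbd exchange in this stretch follows the same shape: register the bdev on an NBD node over the RPC socket, poll /proc/partitions until the kernel exposes the device, then prove it actually services I/O with a single 4 KiB O_DIRECT read. A condensed sketch of that pattern (the retry delay and the failure path are assumptions, since the log above only ever shows the first poll succeeding; /tmp/nbdtest stands in for the repo-local scratch file):

    # Sketch: attach a bdev to an NBD node and wait until it accepts I/O.
    # Assumes rpc.py is reachable and the SPDK target listens on $sock;
    # the 0.1 s retry delay is an assumption not visible in this log.
    wait_for_nbd() {
        local sock=$1 bdev=$2 nbd=$3 i
        rpc.py -s "$sock" nbd_start_disk "$bdev" "/dev/$nbd"
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        ((i <= 20)) || return 1
        # One direct 4 KiB read proves the device answers I/O, not just exists.
        dd if="/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]
    }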
00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.713 1+0 records in 00:10:27.713 1+0 records out 00:10:27.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00077991 s, 5.3 MB/s 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.713 20:38:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd0", 00:10:28.281 "bdev_name": "Nvme0n1" 00:10:28.281 }, 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd1", 00:10:28.281 "bdev_name": "Nvme1n1" 00:10:28.281 }, 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd10", 00:10:28.281 "bdev_name": "Nvme2n1" 00:10:28.281 }, 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd11", 00:10:28.281 "bdev_name": "Nvme2n2" 00:10:28.281 }, 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd12", 00:10:28.281 "bdev_name": "Nvme2n3" 00:10:28.281 }, 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd13", 00:10:28.281 "bdev_name": "Nvme3n1" 00:10:28.281 } 00:10:28.281 ]' 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd0", 00:10:28.281 "bdev_name": "Nvme0n1" 00:10:28.281 }, 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd1", 00:10:28.281 "bdev_name": "Nvme1n1" 00:10:28.281 }, 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd10", 00:10:28.281 "bdev_name": "Nvme2n1" 
00:10:28.281 }, 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd11", 00:10:28.281 "bdev_name": "Nvme2n2" 00:10:28.281 }, 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd12", 00:10:28.281 "bdev_name": "Nvme2n3" 00:10:28.281 }, 00:10:28.281 { 00:10:28.281 "nbd_device": "/dev/nbd13", 00:10:28.281 "bdev_name": "Nvme3n1" 00:10:28.281 } 00:10:28.281 ]' 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:28.281 /dev/nbd1 00:10:28.281 /dev/nbd10 00:10:28.281 /dev/nbd11 00:10:28.281 /dev/nbd12 00:10:28.281 /dev/nbd13' 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:28.281 /dev/nbd1 00:10:28.281 /dev/nbd10 00:10:28.281 /dev/nbd11 00:10:28.281 /dev/nbd12 00:10:28.281 /dev/nbd13' 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:28.281 256+0 records in 00:10:28.281 256+0 records out 00:10:28.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00620511 s, 169 MB/s 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:28.281 256+0 records in 00:10:28.281 256+0 records out 00:10:28.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134271 s, 7.8 MB/s 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:28.281 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:28.540 256+0 records in 00:10:28.540 256+0 records out 00:10:28.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146145 s, 7.2 MB/s 00:10:28.540 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:28.540 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:28.540 256+0 records in 00:10:28.540 256+0 records out 00:10:28.540 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141092 s, 7.4 MB/s 00:10:28.540 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:28.540 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:28.798 256+0 records in 00:10:28.798 256+0 records out 00:10:28.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143076 s, 7.3 MB/s 00:10:28.798 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:28.798 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:29.056 256+0 records in 00:10:29.056 256+0 records out 00:10:29.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146053 s, 7.2 MB/s 00:10:29.056 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.056 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:29.056 256+0 records in 00:10:29.056 256+0 records out 00:10:29.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132209 s, 7.9 MB/s 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.057 20:38:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:29.057 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.057 20:38:24 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:29.057 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:29.057 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:29.057 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.057 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:29.057 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:29.057 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:29.057 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.057 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:29.624 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:29.624 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:29.624 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:29.624 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.624 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.624 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:29.624 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:29.624 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.624 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.624 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:29.883 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:29.883 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:29.883 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:29.883 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.883 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.883 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:29.883 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:29.883 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.883 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.883 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:30.141 20:38:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:30.141 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:30.141 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:30.142 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.142 20:38:25 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.142 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:30.142 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.142 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.142 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.142 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:30.400 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:30.400 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:30.400 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:30.400 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.400 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.400 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:30.400 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.400 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.400 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.400 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:30.659 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:30.659 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:30.659 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:30.659 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.659 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.659 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:30.659 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.659 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.659 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.659 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:30.917 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:30.917 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:30.917 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:30.917 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.917 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.917 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:30.917 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.917 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.917 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:30.917 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.917 20:38:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:31.484 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:31.743 malloc_lvol_verify 00:10:31.743 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:32.000 315b5d2e-5f56-4a82-b889-bb2c945e8241 00:10:32.001 20:38:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:32.258 66c1ab24-c069-46b6-af93-0f4f84cdaa94 00:10:32.518 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:32.776 /dev/nbd0 00:10:32.776 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:32.776 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:32.776 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:32.776 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:32.776 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:32.776 mke2fs 1.47.0 (5-Feb-2023) 00:10:32.776 Discarding device blocks: 0/4096 done 00:10:32.776 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:32.776 00:10:32.776 Allocating group tables: 0/1 done 00:10:32.776 Writing inode tables: 0/1 done 00:10:32.776 Creating journal (1024 blocks): done 00:10:32.776 Writing superblocks and filesystem accounting information: 0/1 done 00:10:32.776 00:10:32.776 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:32.776 20:38:27 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.776 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:32.776 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:32.776 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:32.776 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.776 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61658 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61658 ']' 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61658 00:10:33.034 20:38:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:33.035 20:38:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.035 20:38:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61658 00:10:33.035 killing process with pid 61658 00:10:33.035 20:38:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.035 20:38:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.035 20:38:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61658' 00:10:33.035 20:38:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61658 00:10:33.035 20:38:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61658 00:10:34.933 20:38:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:34.933 00:10:34.933 real 0m15.456s 00:10:34.933 user 0m21.029s 00:10:34.933 sys 0m6.056s 00:10:34.933 20:38:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.933 20:38:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:34.933 ************************************ 00:10:34.933 END TEST bdev_nbd 00:10:34.933 ************************************ 00:10:34.933 20:38:29 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:34.933 20:38:29 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:10:34.933 skipping fio tests on NVMe due to multi-ns failures. 00:10:34.933 20:38:29 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
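The lvol leg that closes out TEST bdev_nbd chains four RPC calls before handing the device to mkfs.ext4: a 16 MiB malloc bdev with 512-byte blocks, an lvstore on top of it, a 4 MiB logical volume, and an NBD attachment whose capacity is read back from sysfs. Roughly, using the sizes and socket path visible above:

    # Sketch of the lvol-over-NBD verification above; each rpc.py call
    # mirrors one nbd_common.sh step from the log.
    sock=/var/tmp/spdk-nbd.sock
    rpc.py -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB, 512 B blocks
    rpc.py -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs  # prints the lvstore UUID
    rpc.py -s "$sock" bdev_lvol_create lvol 4 -l lvs                   # 4 MiB volume in that store
    rpc.py -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    (( $(cat /sys/block/nbd0/size) > 0 ))   # 8192 512 B sectors in the run above
    mkfs.ext4 /dev/nbd0                     # succeeds only if the whole stack is sane
    rpc.py -s "$sock" nbd_stop_disk /dev/nbd0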
00:10:34.933 20:38:29 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:34.933 20:38:29 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:34.933 20:38:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:34.933 20:38:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:34.933 20:38:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:34.933 ************************************
00:10:34.933 START TEST bdev_verify
00:10:34.933 ************************************
00:10:34.933 20:38:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:34.933 [2024-11-26 20:38:29.717982] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:10:34.933 [2024-11-26 20:38:29.718164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62090 ]
00:10:34.933 [2024-11-26 20:38:29.913817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:35.191 [2024-11-26 20:38:30.083191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:35.191 [2024-11-26 20:38:30.083220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:36.125 Running I/O for 5 seconds...
00:10:38.477 18432.00 IOPS, 72.00 MiB/s
[2024-11-26T20:38:34.406Z] 17824.00 IOPS, 69.62 MiB/s
[2024-11-26T20:38:35.340Z] 17792.00 IOPS, 69.50 MiB/s
[2024-11-26T20:38:36.275Z] 17200.00 IOPS, 67.19 MiB/s
[2024-11-26T20:38:36.275Z] 17024.00 IOPS, 66.50 MiB/s
00:10:41.281 Latency(us)
00:10:41.281 [2024-11-26T20:38:36.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:41.281 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0x0 length 0xbd0bd
00:10:41.281 Nvme0n1 : 5.07 1363.34 5.33 0.00 0.00 93641.27 16727.28 116342.00
00:10:41.281 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:10:41.281 Nvme0n1 : 5.04 1422.15 5.56 0.00 0.00 89612.39 17725.93 89877.94
00:10:41.281 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0x0 length 0xa0000
00:10:41.281 Nvme1n1 : 5.07 1362.90 5.32 0.00 0.00 93522.01 16602.45 114344.72
00:10:41.281 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0xa0000 length 0xa0000
00:10:41.281 Nvme1n1 : 5.08 1436.28 5.61 0.00 0.00 88620.04 10860.25 83386.76
00:10:41.281 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0x0 length 0x80000
00:10:41.281 Nvme2n1 : 5.07 1362.46 5.32 0.00 0.00 93387.69 16727.28 109850.82
00:10:41.281 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0x80000 length 0x80000
00:10:41.281 Nvme2n1 : 5.08 1435.90 5.61 0.00 0.00 88434.91 10985.08 80390.83
00:10:41.281 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0x0 length 0x80000
00:10:41.281 Nvme2n2 : 5.07 1362.07 5.32 0.00 0.00 93235.61 16477.62 107354.21
00:10:41.281 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0x80000 length 0x80000
00:10:41.281 Nvme2n2 : 5.08 1435.47 5.61 0.00 0.00 88255.36 11047.50 80890.15
00:10:41.281 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0x0 length 0x80000
00:10:41.281 Nvme2n3 : 5.08 1361.62 5.32 0.00 0.00 93086.74 16352.79 107354.21
00:10:41.281 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0x80000 length 0x80000
00:10:41.281 Nvme2n3 : 5.08 1435.05 5.61 0.00 0.00 88102.50 11421.99 83886.08
00:10:41.281 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0x0 length 0x20000
00:10:41.281 Nvme3n1 : 5.08 1361.16 5.32 0.00 0.00 92925.34 10111.27 115842.68
00:10:41.281 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:41.281 Verification LBA range: start 0x20000 length 0x20000
00:10:41.281 Nvme3n1 : 5.09 1434.70 5.60 0.00 0.00 87980.42 11421.99 87880.66
00:10:41.281 [2024-11-26T20:38:36.275Z] ===================================================================================================================
00:10:41.281 [2024-11-26T20:38:36.275Z] Total : 16773.11 65.52 0.00 0.00 90837.35 10111.27 116342.00
00:10:43.182
00:10:43.182 real 0m8.222s
00:10:43.182 user 0m14.977s
00:10:43.182 sys 0m0.446s
00:10:43.182 20:38:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:43.182 ************************************
00:10:43.182 END TEST bdev_verify
00:10:43.182 20:38:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:10:43.182 ************************************
00:10:43.182 20:38:37 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:43.182 20:38:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:43.182 20:38:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:43.182 20:38:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:43.182 ************************************
00:10:43.182 START TEST bdev_verify_big_io
00:10:43.182 ************************************
00:10:43.182 20:38:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:43.182 [2024-11-26 20:38:37.992270] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
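bdev_verify drives all six namespaces with a queue-depth-128, 4 KiB verify workload on two cores for five seconds; per the paired rows above, -C appears to give each bdev one job per core in the mask, with the core-0 jobs averaging about 93 ms (93,000+ us at 128 outstanding I/Os) and the core-1 jobs about 88-90 ms. Reproducing a single run outside the harness reduces to the invocation below (flag readings inferred from the table above, not from bdevperf's help text):

    # Sketch: standalone bdevperf run matching the bdev_verify parameters.
    #   -q 128     128 outstanding I/Os per job
    #   -o 4096    4 KiB I/O size
    #   -w verify  write, read back, and compare
    #   -t 5       run for five seconds
    #   -m 0x3     cores 0 and 1
    #   -C         one job per bdev per core, hence the paired 0x1/0x2 rows
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3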
00:10:43.182 [2024-11-26 20:38:37.992450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62194 ]
00:10:43.441 [2024-11-26 20:38:38.195580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:43.441 [2024-11-26 20:38:38.369226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:43.441 [2024-11-26 20:38:38.369243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:44.396 Running I/O for 5 seconds...
00:10:50.519 367.00 IOPS, 22.94 MiB/s
[2024-11-26T20:38:45.771Z] 1758.50 IOPS, 109.91 MiB/s
[2024-11-26T20:38:46.030Z] 2193.33 IOPS, 137.08 MiB/s
00:10:51.036 Latency(us)
00:10:51.036 [2024-11-26T20:38:46.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:51.036 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0x0 length 0xbd0b
00:10:51.036 Nvme0n1 : 6.08 80.47 5.03 0.00 0.00 1491894.46 28586.18 1454025.39
00:10:51.036 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0xbd0b length 0xbd0b
00:10:51.036 Nvme0n1 : 6.00 81.94 5.12 0.00 0.00 1439184.26 17351.44 1589840.94
00:10:51.036 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0x0 length 0xa000
00:10:51.036 Nvme1n1 : 6.08 84.21 5.26 0.00 0.00 1403412.48 137812.85 1286253.23
00:10:51.036 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0xa000 length 0xa000
00:10:51.036 Nvme1n1 : 6.01 76.04 4.75 0.00 0.00 1495044.08 140808.78 2348810.24
00:10:51.036 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0x0 length 0x8000
00:10:51.036 Nvme2n1 : 6.08 84.16 5.26 0.00 0.00 1344837.73 138811.49 1238318.32
00:10:51.036 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0x8000 length 0x8000
00:10:51.036 Nvme2n1 : 6.24 86.86 5.43 0.00 0.00 1280842.45 51679.82 2380766.84
00:10:51.036 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0x0 length 0x8000
00:10:51.036 Nvme2n2 : 6.19 93.11 5.82 0.00 0.00 1182865.42 44439.65 1294242.38
00:10:51.036 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0x8000 length 0x8000
00:10:51.036 Nvme2n2 : 6.27 89.80 5.61 0.00 0.00 1175981.35 63413.88 2412723.44
00:10:51.036 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0x0 length 0x8000
00:10:51.036 Nvme2n3 : 6.24 90.08 5.63 0.00 0.00 1159647.80 45937.62 2093157.42
00:10:51.036 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0x8000 length 0x8000
00:10:51.036 Nvme2n3 : 6.32 105.73 6.61 0.00 0.00 949241.59 16352.79 2428701.74
00:10:51.036 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0x0 length 0x2000
00:10:51.036 Nvme3n1 : 6.26 106.90 6.68 0.00 0.00 944317.83 10236.10 1733645.65
00:10:51.036 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:51.036 Verification LBA range: start 0x2000 length 0x2000
00:10:51.036 Nvme3n1 : 6.47 181.25 11.33 0.00 0.00 542426.50 936.23 2492614.95
00:10:51.036 [2024-11-26T20:38:46.030Z] ===================================================================================================================
00:10:51.036 [2024-11-26T20:38:46.030Z] Total : 1160.55 72.53 0.00 0.00 1125432.05 936.23 2492614.95
00:10:52.941
00:10:52.941 real 0m9.785s
00:10:52.941 user 0m18.088s
00:10:52.941 sys 0m0.421s
00:10:52.941 20:38:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:52.941 20:38:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:10:52.941 ************************************
00:10:52.941 END TEST bdev_verify_big_io
00:10:52.941 ************************************
00:10:52.941 20:38:47 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:52.941 20:38:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:52.941 20:38:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:52.941 20:38:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:52.941 ************************************
00:10:52.941 START TEST bdev_write_zeroes
00:10:52.941 ************************************
00:10:52.941 20:38:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:52.941 [2024-11-26 20:38:47.836977] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:10:52.941 [2024-11-26 20:38:47.837156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62325 ]
00:10:53.200 [2024-11-26 20:38:48.043318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:53.458 [2024-11-26 20:38:48.248251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:54.393 Running I/O for 1 seconds...
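bdev_verify_big_io, whose results close just above, repeats the verify workload at -o 65536, so the run is throughput- rather than latency-bound and the ramp from 367 to 2193 IOPS reflects 64 KiB transfers filling in. The IOPS and MiB/s columns stay mutually consistent as IOPS x block size / 2^20; checking the Total row:

    # Sanity check: 1160.55 IOPS x 64 KiB should reproduce the MiB/s column.
    awk 'BEGIN { print 1160.55 * 65536 / 2^20 }'   # -> 72.53, matching Total above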
00:10:55.326 52224.00 IOPS, 204.00 MiB/s
00:10:55.326 Latency(us)
00:10:55.326 [2024-11-26T20:38:50.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:55.326 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.326 Nvme0n1 : 1.03 8667.57 33.86 0.00 0.00 14733.74 8987.79 24466.77
00:10:55.326 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.326 Nvme1n1 : 1.03 8657.58 33.82 0.00 0.00 14730.38 11671.65 24217.11
00:10:55.326 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.326 Nvme2n1 : 1.03 8647.77 33.78 0.00 0.00 14711.09 11172.33 22469.49
00:10:55.326 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.326 Nvme2n2 : 1.03 8638.00 33.74 0.00 0.00 14647.70 9424.70 21595.67
00:10:55.326 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.326 Nvme2n3 : 1.03 8627.96 33.70 0.00 0.00 14618.90 8238.81 23093.64
00:10:55.326 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:55.326 Nvme3n1 : 1.03 8618.19 33.66 0.00 0.00 14610.84 6959.30 24716.43
00:10:55.326 [2024-11-26T20:38:50.320Z] ===================================================================================================================
00:10:55.326 [2024-11-26T20:38:50.320Z] Total : 51857.08 202.57 0.00 0.00 14675.44 6959.30 24716.43
00:10:56.701
00:10:56.701 real 0m3.850s
00:10:56.701 user 0m3.314s
00:10:56.701 sys 0m0.411s
00:10:56.701 20:38:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:56.701 20:38:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:10:56.701 ************************************
00:10:56.701 END TEST bdev_write_zeroes
00:10:56.701 ************************************
00:10:56.701 20:38:51 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:56.701 20:38:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:56.701 20:38:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:56.701 20:38:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:56.702 ************************************
00:10:56.702 START TEST bdev_json_nonenclosed
00:10:56.702 ************************************
00:10:56.702 20:38:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:56.961 [2024-11-26 20:38:51.763775] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
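bdev_json_nonenclosed, starting here, is a negative test: bdevperf is pointed at a config whose contents are not wrapped in an enclosing JSON object, and the test passes precisely because startup fails with the json_config error printed below. A minimal form of that check (treating a zero exit status as a test failure is the assumed harness behavior):

    # Sketch: the negative JSON-config check; startup must fail for a pass.
    if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
           --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
           -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "ERROR: malformed config was accepted" >&2
        exit 1
    fi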
00:10:56.961 [2024-11-26 20:38:51.763959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62378 ] 00:10:57.219 [2024-11-26 20:38:51.960462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.219 [2024-11-26 20:38:52.122468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.219 [2024-11-26 20:38:52.122592] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:57.219 [2024-11-26 20:38:52.122630] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:57.219 [2024-11-26 20:38:52.122645] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:57.478 00:10:57.478 real 0m0.806s 00:10:57.478 user 0m0.521s 00:10:57.478 sys 0m0.177s 00:10:57.478 20:38:52 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.478 20:38:52 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:57.478 ************************************ 00:10:57.478 END TEST bdev_json_nonenclosed 00:10:57.478 ************************************ 00:10:57.736 20:38:52 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:57.736 20:38:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:57.736 20:38:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.736 20:38:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:57.736 ************************************ 00:10:57.736 START TEST bdev_json_nonarray 00:10:57.736 ************************************ 00:10:57.736 20:38:52 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:57.736 [2024-11-26 20:38:52.630258] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:57.736 [2024-11-26 20:38:52.630445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62409 ] 00:10:57.994 [2024-11-26 20:38:52.828320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.254 [2024-11-26 20:38:52.996557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.254 [2024-11-26 20:38:52.996714] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
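bdev_json_nonarray is the companion negative test: its config parses as JSON but the 'subsystems' key is not an array, which triggers the error just above before the rpc.c and app.c fallout that follows. Hypothetical minimal shapes reconstructed from the two error strings (the real nonenclosed.json and nonarray.json under test/bdev/ may differ in detail):

    # Fails with "not enclosed in {}": a bare key/value at top level.
    printf '%s\n' '"subsystems": []' > /tmp/nonenclosed.json
    # Fails with "'subsystems' should be an array": enclosed, wrong value type.
    printf '%s\n' '{ "subsystems": {} }' > /tmp/nonarray.json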
00:10:58.254 [2024-11-26 20:38:52.996745] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:10:58.255 [2024-11-26 20:38:52.996760] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:58.514
00:10:58.514 real 0m0.794s
00:10:58.514 user 0m0.510s
00:10:58.514 sys 0m0.177s
00:10:58.514 20:38:53 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:58.514 20:38:53 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:10:58.514 ************************************
00:10:58.514 END TEST bdev_json_nonarray
00:10:58.514 ************************************
00:10:58.514 20:38:53 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]]
00:10:58.514 20:38:53 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]]
00:10:58.514 20:38:53 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]]
00:10:58.514 20:38:53 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:10:58.514 20:38:53 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup
00:10:58.514 20:38:53 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:10:58.514 20:38:53 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:58.514 20:38:53 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:10:58.514 20:38:53 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:10:58.514 20:38:53 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:10:58.514 20:38:53 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:10:58.514
00:10:58.514 real 0m51.998s
00:10:58.514 user 1m16.507s
00:10:58.514 sys 0m10.131s
00:10:58.514 20:38:53 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:58.514 20:38:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:58.514 ************************************
00:10:58.514 END TEST blockdev_nvme
00:10:58.514 ************************************
00:10:58.514 20:38:53 -- spdk/autotest.sh@209 -- # uname -s
00:10:58.514 20:38:53 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:10:58.514 20:38:53 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:10:58.514 20:38:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:58.514 20:38:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:58.514 20:38:53 -- common/autotest_common.sh@10 -- # set +x
00:10:58.514 ************************************
00:10:58.514 START TEST blockdev_nvme_gpt
00:10:58.514 ************************************
00:10:58.514 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:10:58.775 * Looking for test storage...
00:10:58.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:58.775 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:58.775 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:10:58.775 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:58.775 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.775 20:38:53 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:10:58.775 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.775 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:58.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.775 --rc genhtml_branch_coverage=1 00:10:58.775 --rc genhtml_function_coverage=1 00:10:58.776 --rc genhtml_legend=1 00:10:58.776 --rc geninfo_all_blocks=1 00:10:58.776 --rc geninfo_unexecuted_blocks=1 00:10:58.776 00:10:58.776 ' 00:10:58.776 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:58.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.776 --rc 
genhtml_branch_coverage=1 00:10:58.776 --rc genhtml_function_coverage=1 00:10:58.776 --rc genhtml_legend=1 00:10:58.776 --rc geninfo_all_blocks=1 00:10:58.776 --rc geninfo_unexecuted_blocks=1 00:10:58.776 00:10:58.776 ' 00:10:58.776 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:58.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.776 --rc genhtml_branch_coverage=1 00:10:58.776 --rc genhtml_function_coverage=1 00:10:58.776 --rc genhtml_legend=1 00:10:58.776 --rc geninfo_all_blocks=1 00:10:58.776 --rc geninfo_unexecuted_blocks=1 00:10:58.776 00:10:58.776 ' 00:10:58.776 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:58.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.776 --rc genhtml_branch_coverage=1 00:10:58.776 --rc genhtml_function_coverage=1 00:10:58.776 --rc genhtml_legend=1 00:10:58.776 --rc geninfo_all_blocks=1 00:10:58.776 --rc geninfo_unexecuted_blocks=1 00:10:58.776 00:10:58.776 ' 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62493 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62493 
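The start_spdk_tgt step that follows reduces to launching the target in the background and blocking until its RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that pattern, assuming the waitforlisten helper from autotest_common.sh that the trace above is exercising:

# launch the target and remember its pid (binary path as in the trace below)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
# harness helper: polls until the pid is alive and /var/tmp/spdk.sock accepts RPCs
waitforlisten "$spdk_tgt_pid"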
00:10:58.776 20:38:53 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:58.776 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62493 ']' 00:10:58.776 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.776 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.776 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.776 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.776 20:38:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:59.036 [2024-11-26 20:38:53.830794] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:10:59.036 [2024-11-26 20:38:53.831061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62493 ] 00:10:59.295 [2024-11-26 20:38:54.031159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.295 [2024-11-26 20:38:54.154680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.232 20:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.232 20:38:55 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:11:00.232 20:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:11:00.232 20:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:11:00.232 20:38:55 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:00.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:00.748 Waiting for block devices as requested 00:11:01.006 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.006 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.006 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.263 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.546 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:06.546 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:06.546 20:39:01 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:11:06.546 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:11:06.547 20:39:01 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:11:06.547 BYT; 00:11:06.547 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:11:06.547 BYT; 00:11:06.547 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:06.547 20:39:01 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:06.547 20:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:11:07.486 The operation has completed successfully. 00:11:07.486 20:39:02 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:11:08.419 The operation has completed successfully. 00:11:08.419 20:39:03 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:08.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:09.552 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:09.552 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:09.810 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:09.810 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:09.810 20:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:11:09.810 20:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.810 20:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:09.810 [] 00:11:09.810 20:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.810 20:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:11:09.810 20:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:11:09.810 20:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:09.810 20:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:10.068 20:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:10.068 20:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.068 20:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.328 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.328 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:11:10.328 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:11:10.328 20:39:05 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.328 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.328 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.328 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:11:10.328 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.328 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:10.328 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:11:10.703 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.703 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:11:10.703 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:11:10.704 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "aabe9bb5-49f9-4540-aa40-4a46f3e6e04b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "aabe9bb5-49f9-4540-aa40-4a46f3e6e04b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e1352757-474a-4e65-86ca-32ac2af6fc4c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1352757-474a-4e65-86ca-32ac2af6fc4c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e5f488bc-9900-4a5b-82e9-eed138595955"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e5f488bc-9900-4a5b-82e9-eed138595955",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3614e8a7-db4f-4a46-93d1-cf38602e192d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3614e8a7-db4f-4a46-93d1-cf38602e192d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b87eebd3-cd26-4fe1-9cf8-c94887f535c2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b87eebd3-cd26-4fe1-9cf8-c94887f535c2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:10.704 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:11:10.704 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:11:10.704 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:11:10.704 20:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62493 00:11:10.704 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62493 ']' 00:11:10.704 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62493 00:11:10.704 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:11:10.704 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.704 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62493 00:11:10.704 killing process with pid 62493 00:11:10.704 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.704 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.704 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62493' 00:11:10.704 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62493 00:11:10.704 20:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62493 00:11:14.061 20:39:08 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:14.061 20:39:08 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:14.061 20:39:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:14.061 20:39:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.061 20:39:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:14.061 ************************************ 00:11:14.061 START TEST bdev_hello_world 00:11:14.061 ************************************ 00:11:14.061 20:39:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:14.061 
[2024-11-26 20:39:08.663705] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:14.061 [2024-11-26 20:39:08.663942] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63151 ] 00:11:14.061 [2024-11-26 20:39:08.870512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.061 [2024-11-26 20:39:09.035039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.997 [2024-11-26 20:39:09.801440] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:14.997 [2024-11-26 20:39:09.801532] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:14.997 [2024-11-26 20:39:09.801577] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:14.997 [2024-11-26 20:39:09.806595] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:14.997 [2024-11-26 20:39:09.807269] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:14.997 [2024-11-26 20:39:09.807318] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:14.997 [2024-11-26 20:39:09.807600] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:14.997 00:11:14.997 [2024-11-26 20:39:09.807657] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:16.373 00:11:16.373 real 0m2.737s 00:11:16.373 user 0m2.224s 00:11:16.373 sys 0m0.399s 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:16.373 ************************************ 00:11:16.373 END TEST bdev_hello_world 00:11:16.373 ************************************ 00:11:16.373 20:39:11 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:11:16.373 20:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.373 20:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.373 20:39:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:16.373 ************************************ 00:11:16.373 START TEST bdev_bounds 00:11:16.373 ************************************ 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63198 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:16.373 Process bdevio pid: 63198 00:11:16.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
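The bounds test that starts here is two cooperating processes: bdevio launched with -w so it registers the bdevs and then waits, and tests.py perform_tests kicking off the CUnit suites over RPC. A sketch of that pairing, reusing the exact binaries and config paths from this trace (illustrative; in the real run the harness gates the second command on waitforlisten):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
# once the RPC socket is up:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests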
00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63198' 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63198 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63198 ']' 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.373 20:39:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:16.632 [2024-11-26 20:39:11.405094] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:16.632 [2024-11-26 20:39:11.406464] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63198 ] 00:11:16.632 [2024-11-26 20:39:11.616356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.891 [2024-11-26 20:39:11.783904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.891 [2024-11-26 20:39:11.784025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.891 [2024-11-26 20:39:11.784051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.827 20:39:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.827 20:39:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:11:17.827 20:39:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:17.827 I/O targets: 00:11:17.827 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:17.827 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:11:17.827 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:11:17.828 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:17.828 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:17.828 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:17.828 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:17.828 00:11:17.828 00:11:17.828 CUnit - A unit testing framework for C - Version 2.1-3 00:11:17.828 http://cunit.sourceforge.net/ 00:11:17.828 00:11:17.828 00:11:17.828 Suite: bdevio tests on: Nvme3n1 00:11:17.828 Test: blockdev write read block ...passed 00:11:17.828 Test: blockdev write zeroes read block ...passed 00:11:17.828 Test: blockdev write zeroes read no split ...passed 00:11:18.087 Test: blockdev write zeroes read split ...passed 00:11:18.087 Test: blockdev write zeroes read split partial ...passed 00:11:18.087 Test: blockdev reset ...[2024-11-26 20:39:12.880243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:18.087 passed 00:11:18.087 Test: blockdev write read 8 blocks ...[2024-11-26 20:39:12.884861] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:11:18.087 passed 00:11:18.087 Test: blockdev write read size > 128k ...passed 00:11:18.087 Test: blockdev write read invalid size ...passed 00:11:18.087 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.087 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.087 Test: blockdev write read max offset ...passed 00:11:18.087 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:18.087 Test: blockdev writev readv 8 blocks ...passed 00:11:18.087 Test: blockdev writev readv 30 x 1block ...passed 00:11:18.087 Test: blockdev writev readv block ...passed 00:11:18.087 Test: blockdev writev readv size > 128k ...passed 00:11:18.087 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:18.087 Test: blockdev comparev and writev ...[2024-11-26 20:39:12.893924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0004000 len:0x1000 00:11:18.087 [2024-11-26 20:39:12.893994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:18.087 passed 00:11:18.087 Test: blockdev nvme passthru rw ...passed 00:11:18.087 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:39:12.894906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:11:18.087 Test: blockdev nvme admin passthru ...RP2 0x0 00:11:18.087 [2024-11-26 20:39:12.895096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:18.087 passed 00:11:18.087 Test: blockdev copy ...passed 00:11:18.087 Suite: bdevio tests on: Nvme2n3 00:11:18.087 Test: blockdev write read block ...passed 00:11:18.087 Test: blockdev write zeroes read block ...passed 00:11:18.087 Test: blockdev write zeroes read no split ...passed 00:11:18.087 Test: blockdev write zeroes read split ...passed 00:11:18.087 Test: blockdev write zeroes read split partial ...passed 00:11:18.087 Test: blockdev reset ...[2024-11-26 20:39:12.979001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:18.087 [2024-11-26 20:39:12.984430] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:18.087 passed 00:11:18.087 Test: blockdev write read 8 blocks ...passed 00:11:18.087 Test: blockdev write read size > 128k ...passed 00:11:18.087 Test: blockdev write read invalid size ...passed 00:11:18.087 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.087 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.087 Test: blockdev write read max offset ...passed 00:11:18.087 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:18.087 Test: blockdev writev readv 8 blocks ...passed 00:11:18.087 Test: blockdev writev readv 30 x 1block ...passed 00:11:18.087 Test: blockdev writev readv block ...passed 00:11:18.087 Test: blockdev writev readv size > 128k ...passed 00:11:18.087 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:18.087 Test: blockdev comparev and writev ...[2024-11-26 20:39:12.994759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0002000 len:0x1000 00:11:18.087 [2024-11-26 20:39:12.994969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:passed0 sqhd:0018 p:1 m:0 dnr:1 00:11:18.087 00:11:18.087 Test: blockdev nvme passthru rw ...passed 00:11:18.087 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:39:12.996448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:18.087 [2024-11-26 20:39:12.996632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:18.087 passed 00:11:18.087 Test: blockdev nvme admin passthru ...passed 00:11:18.087 Test: blockdev copy ...passed 00:11:18.087 Suite: bdevio tests on: Nvme2n2 00:11:18.087 Test: blockdev write read block ...passed 00:11:18.087 Test: blockdev write zeroes read block ...passed 00:11:18.087 Test: blockdev write zeroes read no split ...passed 00:11:18.087 Test: blockdev write zeroes read split ...passed 00:11:18.088 Test: blockdev write zeroes read split partial ...passed 00:11:18.347 Test: blockdev reset ...[2024-11-26 20:39:13.079800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:18.347 passed 00:11:18.348 Test: blockdev write read 8 blocks ...[2024-11-26 20:39:13.085098] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:18.348 passed 00:11:18.348 Test: blockdev write read size > 128k ...passed 00:11:18.348 Test: blockdev write read invalid size ...passed 00:11:18.348 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.348 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.348 Test: blockdev write read max offset ...passed 00:11:18.348 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:18.348 Test: blockdev writev readv 8 blocks ...passed 00:11:18.348 Test: blockdev writev readv 30 x 1block ...passed 00:11:18.348 Test: blockdev writev readv block ...passed 00:11:18.348 Test: blockdev writev readv size > 128k ...passed 00:11:18.348 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:18.348 Test: blockdev comparev and writev ...[2024-11-26 20:39:13.094252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4638000 len:0x1000 00:11:18.348 [2024-11-26 20:39:13.094318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:18.348 passed 00:11:18.348 Test: blockdev nvme passthru rw ...passed 00:11:18.348 Test: blockdev nvme passthru vendor specific ...passed 00:11:18.348 Test: blockdev nvme admin passthru ...[2024-11-26 20:39:13.095204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:18.348 [2024-11-26 20:39:13.095250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:18.348 passed 00:11:18.348 Test: blockdev copy ...passed 00:11:18.348 Suite: bdevio tests on: Nvme2n1 00:11:18.348 Test: blockdev write read block ...passed 00:11:18.348 Test: blockdev write zeroes read block ...passed 00:11:18.348 Test: blockdev write zeroes read no split ...passed 00:11:18.348 Test: blockdev write zeroes read split ...passed 00:11:18.348 Test: blockdev write zeroes read split partial ...passed 00:11:18.348 Test: blockdev reset ...[2024-11-26 20:39:13.177636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:18.348 [2024-11-26 20:39:13.182740] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:11:18.348 Test: blockdev write read 8 blocks ...uccessful. 
00:11:18.348 passed 00:11:18.348 Test: blockdev write read size > 128k ...passed 00:11:18.348 Test: blockdev write read invalid size ...passed 00:11:18.348 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.348 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.348 Test: blockdev write read max offset ...passed 00:11:18.348 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:18.348 Test: blockdev writev readv 8 blocks ...passed 00:11:18.348 Test: blockdev writev readv 30 x 1block ...passed 00:11:18.348 Test: blockdev writev readv block ...passed 00:11:18.348 Test: blockdev writev readv size > 128k ...passed 00:11:18.348 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:18.348 Test: blockdev comparev and writev ...[2024-11-26 20:39:13.192232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4634000 len:0x1000 00:11:18.348 [2024-11-26 20:39:13.192296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:18.348 passed 00:11:18.348 Test: blockdev nvme passthru rw ...passed 00:11:18.348 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:39:13.193213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:11:18.348 Test: blockdev nvme admin passthru ...RP2 0x0 00:11:18.348 [2024-11-26 20:39:13.193370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:18.348 passed 00:11:18.348 Test: blockdev copy ...passed 00:11:18.348 Suite: bdevio tests on: Nvme1n1p2 00:11:18.348 Test: blockdev write read block ...passed 00:11:18.348 Test: blockdev write zeroes read block ...passed 00:11:18.348 Test: blockdev write zeroes read no split ...passed 00:11:18.348 Test: blockdev write zeroes read split ...passed 00:11:18.348 Test: blockdev write zeroes read split partial ...passed 00:11:18.348 Test: blockdev reset ...[2024-11-26 20:39:13.278005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:18.348 [2024-11-26 20:39:13.282626] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:18.348 passed 00:11:18.348 Test: blockdev write read 8 blocks ...passed 00:11:18.348 Test: blockdev write read size > 128k ...passed 00:11:18.348 Test: blockdev write read invalid size ...passed 00:11:18.348 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.348 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.348 Test: blockdev write read max offset ...passed 00:11:18.348 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:18.348 Test: blockdev writev readv 8 blocks ...passed 00:11:18.348 Test: blockdev writev readv 30 x 1block ...passed 00:11:18.348 Test: blockdev writev readv block ...passed 00:11:18.348 Test: blockdev writev readv size > 128k ...passed 00:11:18.348 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:18.348 Test: blockdev comparev and writev ...[2024-11-26 20:39:13.293084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 lpassed 00:11:18.348 Test: blockdev nvme passthru rw ...passed 00:11:18.348 Test: blockdev nvme passthru vendor specific ...passed 00:11:18.348 Test: blockdev nvme admin passthru ...passed 00:11:18.348 Test: blockdev copy ...en:1 SGL DATA BLOCK ADDRESS 0x2c4630000 len:0x1000 00:11:18.348 [2024-11-26 20:39:13.293280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:18.348 passed 00:11:18.348 Suite: bdevio tests on: Nvme1n1p1 00:11:18.348 Test: blockdev write read block ...passed 00:11:18.348 Test: blockdev write zeroes read block ...passed 00:11:18.348 Test: blockdev write zeroes read no split ...passed 00:11:18.348 Test: blockdev write zeroes read split ...passed 00:11:18.607 Test: blockdev write zeroes read split partial ...passed 00:11:18.607 Test: blockdev reset ...[2024-11-26 20:39:13.377129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:18.607 [2024-11-26 20:39:13.382037] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:18.607 passed 00:11:18.607 Test: blockdev write read 8 blocks ...passed 00:11:18.607 Test: blockdev write read size > 128k ...passed 00:11:18.607 Test: blockdev write read invalid size ...passed 00:11:18.607 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.607 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.607 Test: blockdev write read max offset ...passed 00:11:18.607 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:18.607 Test: blockdev writev readv 8 blocks ...passed 00:11:18.607 Test: blockdev writev readv 30 x 1block ...passed 00:11:18.607 Test: blockdev writev readv block ...passed 00:11:18.607 Test: blockdev writev readv size > 128k ...passed 00:11:18.607 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:18.607 Test: blockdev comparev and writev ...[2024-11-26 20:39:13.391916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b0a0e000 len:0x1000 00:11:18.607 [2024-11-26 20:39:13.392098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:18.607 passed 00:11:18.607 Test: blockdev nvme passthru rw ...passed 00:11:18.607 Test: blockdev nvme passthru vendor specific ...passed 00:11:18.607 Test: blockdev nvme admin passthru ...passed 00:11:18.607 Test: blockdev copy ...passed 00:11:18.607 Suite: bdevio tests on: Nvme0n1 00:11:18.607 Test: blockdev write read block ...passed 00:11:18.607 Test: blockdev write zeroes read block ...passed 00:11:18.607 Test: blockdev write zeroes read no split ...passed 00:11:18.607 Test: blockdev write zeroes read split ...passed 00:11:18.607 Test: blockdev write zeroes read split partial ...passed 00:11:18.607 Test: blockdev reset ...[2024-11-26 20:39:13.473264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:18.607 passed 00:11:18.607 Test: blockdev write read 8 blocks ...[2024-11-26 20:39:13.477885] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:18.607 passed 00:11:18.607 Test: blockdev write read size > 128k ...passed 00:11:18.607 Test: blockdev write read invalid size ...passed 00:11:18.607 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:18.607 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:18.607 Test: blockdev write read max offset ...passed 00:11:18.607 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:18.607 Test: blockdev writev readv 8 blocks ...passed 00:11:18.607 Test: blockdev writev readv 30 x 1block ...passed 00:11:18.607 Test: blockdev writev readv block ...passed 00:11:18.607 Test: blockdev writev readv size > 128k ...passed 00:11:18.607 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:18.607 Test: blockdev comparev and writev ...passed 00:11:18.607 Test: blockdev nvme passthru rw ...[2024-11-26 20:39:13.486085] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:18.607 separate metadata which is not supported yet.
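Note on the notices above: the COMPARE FAILURE (02/85) completions are the expected result of the "blockdev comparev and writev" test, not a failure of the run. Status code type 2h is media and data integrity errors and status code 85h is compare failure, i.e. the deliberate miscompare the test issues; dnr:1 only marks the completion do-not-retry. Likewise the *ERROR* line for Nvme0n1 is an intentional skip: bdevio does not run compare-and-write against a bdev whose metadata is stored separately from the data. A minimal sketch of how to list such bdevs on a running app, assuming the build reports md_size and md_interleave in bdev_get_bdevs output (recent SPDK does):

# list bdevs with separate (non-interleaved) metadata, the ones bdevio skips here
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.md_size > 0 and .md_interleave == false) | .name'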
00:11:18.607 passed 00:11:18.607 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:39:13.486672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:18.607 passed 00:11:18.607 Test: blockdev nvme admin passthru ...[2024-11-26 20:39:13.486724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:18.607 passed 00:11:18.607 Test: blockdev copy ...passed 00:11:18.607 00:11:18.607 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.607 suites 7 7 n/a 0 0 00:11:18.607 tests 161 161 161 0 0 00:11:18.607 asserts 1025 1025 1025 0 n/a 00:11:18.607 00:11:18.607 Elapsed time = 1.921 seconds 00:11:18.607 0 00:11:18.607 20:39:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63198 00:11:18.607 20:39:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63198 ']' 00:11:18.607 20:39:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63198 00:11:18.607 20:39:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:11:18.607 20:39:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.607 20:39:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63198 00:11:18.607 20:39:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.607 20:39:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.607 20:39:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63198' 00:11:18.607 killing process with pid 63198 00:11:18.607 20:39:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63198 00:11:18.607 20:39:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63198 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:19.985 00:11:19.985 real 0m3.580s 00:11:19.985 user 0m9.217s 00:11:19.985 sys 0m0.635s 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.985 ************************************ 00:11:19.985 END TEST bdev_bounds 00:11:19.985 ************************************ 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:19.985 20:39:14 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:19.985 20:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:19.985 20:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.985 20:39:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:19.985 ************************************ 00:11:19.985 START TEST bdev_nbd 00:11:19.985 ************************************ 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63269 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63269 /var/tmp/spdk-nbd.sock 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63269 ']' 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.985 20:39:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:20.245 [2024-11-26 20:39:15.038037] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
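Two common/autotest_common.sh helpers do the heavy lifting around this point. The killprocess call that tore down the bdev_bounds app above expands, per its xtrace, to roughly the following (a reconstruction from the trace, not the exact source):

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    if kill -0 "$pid"; then                                  # still alive?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # "reactor_0" in this run
        fi
        # the real helper special-cases a process name of "sudo" at this point
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi
}

The bdev_svc launch plus waitforlisten step traced just above amounts to starting the app against the JSON config and polling its RPC socket until it answers. A sketch, assuming waitforlisten probes the socket with rpc_get_methods and with $rootdir standing in for /home/vagrant/spdk_repo/spdk:

"$rootdir/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-nbd.sock -i 0 \
    --json "$rootdir/test/bdev/bdev.json" &
nbd_pid=$!
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
while ! "$rootdir/scripts/rpc.py" -t 1 -s /var/tmp/spdk-nbd.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nbd_pid" || exit 1                             # bail out if the app died
    sleep 0.1
done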
00:11:20.245 [2024-11-26 20:39:15.038178] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.245 [2024-11-26 20:39:15.217996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.503 [2024-11-26 20:39:15.381035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:21.440 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:21.699 1+0 records in 00:11:21.699 1+0 records out 00:11:21.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578171 s, 7.1 MB/s 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:21.699 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:21.957 1+0 records in 00:11:21.957 1+0 records out 00:11:21.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488389 s, 8.4 MB/s 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:21.957 20:39:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:22.524 1+0 records in 00:11:22.524 1+0 records out 00:11:22.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617911 s, 6.6 MB/s 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:22.524 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:22.784 1+0 records in 00:11:22.784 1+0 records out 00:11:22.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619249 s, 6.6 MB/s 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:22.784 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.043 1+0 records in 00:11:23.043 1+0 records out 00:11:23.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766379 s, 5.3 MB/s 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:23.043 20:39:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.313 1+0 records in 00:11:23.313 1+0 records out 00:11:23.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105912 s, 3.9 MB/s 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:23.313 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:23.573 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.832 1+0 records in 00:11:23.832 1+0 records out 00:11:23.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646167 s, 6.3 MB/s 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd0", 00:11:23.832 "bdev_name": "Nvme0n1" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd1", 00:11:23.832 "bdev_name": "Nvme1n1p1" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd2", 00:11:23.832 "bdev_name": "Nvme1n1p2" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd3", 00:11:23.832 "bdev_name": "Nvme2n1" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd4", 00:11:23.832 "bdev_name": "Nvme2n2" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd5", 00:11:23.832 "bdev_name": "Nvme2n3" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd6", 00:11:23.832 "bdev_name": "Nvme3n1" 00:11:23.832 } 00:11:23.832 ]' 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:23.832 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd0", 00:11:23.832 "bdev_name": "Nvme0n1" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd1", 00:11:23.832 "bdev_name": "Nvme1n1p1" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd2", 00:11:23.832 "bdev_name": "Nvme1n1p2" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd3", 00:11:23.832 "bdev_name": "Nvme2n1" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd4", 00:11:23.832 "bdev_name": "Nvme2n2" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd5", 00:11:23.832 "bdev_name": "Nvme2n3" 00:11:23.832 }, 00:11:23.832 { 00:11:23.832 "nbd_device": "/dev/nbd6", 00:11:23.832 "bdev_name": "Nvme3n1" 00:11:23.832 } 00:11:23.832 ]' 00:11:24.091 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:24.092 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.092 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:24.092 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:24.092 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:24.092 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.092 20:39:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:24.356 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:24.356 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:24.356 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:24.356 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.356 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.356 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:24.356 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.356 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.356 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.356 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:24.616 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:24.616 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:24.616 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:24.616 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.616 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.616 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:24.616 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.616 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.616 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.616 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:24.874 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:24.874 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:24.874 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:24.874 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.874 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.874 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:24.874 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.874 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.875 20:39:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.875 20:39:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:25.134 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:25.134 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:25.134 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:25.134 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.134 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.134 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:25.134 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.134 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.134 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.134 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:25.393 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:25.393 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:25.393 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:25.393 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.393 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.393 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:25.393 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.393 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.393 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.393 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:25.960 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:25.960 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:25.960 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:25.960 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.960 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.960 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:25.960 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.960 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.960 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.960 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:26.218 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:26.218 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:26.218 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
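The stop path mirrors the start path: each nbd_stop_disk RPC is followed by waitfornbd_exit, the helper being traced here, which polls /proc/partitions until the device name disappears. Reconstructed from the xtrace (the sleep on the retry path is an assumption; this run only shows the fast path):

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            break                                   # device is gone
        fi
        sleep 0.1                                   # assumed retry delay
    done
    return 0
}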
00:11:26.218 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.218 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.218 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:26.218 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:26.218 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.218 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:26.218 20:39:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.218 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:26.478 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:26.478 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:26.478 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:26.478 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:26.478 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:26.478 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:26.478 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:26.478 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:26.478 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:26.478 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:26.478 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:26.479 20:39:21 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:26.479 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:26.738 /dev/nbd0 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.738 1+0 records in 00:11:26.738 1+0 records out 00:11:26.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488858 s, 8.4 MB/s 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:26.738 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:26.997 /dev/nbd1 00:11:26.997 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:26.997 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:26.997 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:26.997 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:26.997 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:26.997 20:39:21 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:26.997 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.998 1+0 records in 00:11:26.998 1+0 records out 00:11:26.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608394 s, 6.7 MB/s 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:26.998 20:39:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:27.566 /dev/nbd10 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.566 1+0 records in 00:11:27.566 1+0 records out 00:11:27.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000852811 s, 4.8 MB/s 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:27.566 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:27.824 /dev/nbd11 00:11:27.824 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:27.824 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.825 1+0 records in 00:11:27.825 1+0 records out 00:11:27.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000899908 s, 4.6 MB/s 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:27.825 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:28.083 /dev/nbd12 00:11:28.083 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:28.083 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:28.083 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:28.083 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
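Every nbd_start_disk above is followed by this same waitfornbd pattern: wait for the device to appear in /proc/partitions, then prove it can serve I/O by reading one 4 KiB block through it with O_DIRECT and checking that something was actually read back. Reconstructed from the repeated xtrace (retry sleeps are assumed; the trace only ever shows first-try success):

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break    # device registered
        sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do
        # read one block back through the nbd device, bypassing the page cache
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    [[ $size != 0 ]]                                        # fail if nothing came back
}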
00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.084 1+0 records in 00:11:28.084 1+0 records out 00:11:28.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551712 s, 7.4 MB/s 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:28.084 20:39:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:28.342 /dev/nbd13 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.342 1+0 records in 00:11:28.342 1+0 records out 00:11:28.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697271 s, 5.9 MB/s 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:28.342 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:28.602 /dev/nbd14 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.602 1+0 records in 00:11:28.602 1+0 records out 00:11:28.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0013494 s, 3.0 MB/s 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.602 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:28.861 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:28.861 { 00:11:28.861 "nbd_device": "/dev/nbd0", 00:11:28.861 "bdev_name": "Nvme0n1" 00:11:28.861 }, 00:11:28.861 { 00:11:28.861 "nbd_device": "/dev/nbd1", 00:11:28.861 "bdev_name": "Nvme1n1p1" 00:11:28.861 }, 00:11:28.861 { 00:11:28.861 "nbd_device": "/dev/nbd10", 00:11:28.861 "bdev_name": "Nvme1n1p2" 00:11:28.861 }, 00:11:28.861 { 00:11:28.861 "nbd_device": "/dev/nbd11", 00:11:28.861 "bdev_name": "Nvme2n1" 00:11:28.861 }, 00:11:28.861 { 00:11:28.861 "nbd_device": "/dev/nbd12", 00:11:28.861 "bdev_name": "Nvme2n2" 00:11:28.861 }, 00:11:28.861 { 00:11:28.861 "nbd_device": "/dev/nbd13", 00:11:28.861 "bdev_name": "Nvme2n3" 
00:11:28.861 }, 00:11:28.861 { 00:11:28.861 "nbd_device": "/dev/nbd14", 00:11:28.861 "bdev_name": "Nvme3n1" 00:11:28.861 } 00:11:28.861 ]' 00:11:28.861 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:28.861 { 00:11:28.861 "nbd_device": "/dev/nbd0", 00:11:28.861 "bdev_name": "Nvme0n1" 00:11:28.861 }, 00:11:28.861 { 00:11:28.861 "nbd_device": "/dev/nbd1", 00:11:28.861 "bdev_name": "Nvme1n1p1" 00:11:28.861 }, 00:11:28.861 { 00:11:28.861 "nbd_device": "/dev/nbd10", 00:11:28.861 "bdev_name": "Nvme1n1p2" 00:11:28.861 }, 00:11:28.861 { 00:11:28.862 "nbd_device": "/dev/nbd11", 00:11:28.862 "bdev_name": "Nvme2n1" 00:11:28.862 }, 00:11:28.862 { 00:11:28.862 "nbd_device": "/dev/nbd12", 00:11:28.862 "bdev_name": "Nvme2n2" 00:11:28.862 }, 00:11:28.862 { 00:11:28.862 "nbd_device": "/dev/nbd13", 00:11:28.862 "bdev_name": "Nvme2n3" 00:11:28.862 }, 00:11:28.862 { 00:11:28.862 "nbd_device": "/dev/nbd14", 00:11:28.862 "bdev_name": "Nvme3n1" 00:11:28.862 } 00:11:28.862 ]' 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:28.862 /dev/nbd1 00:11:28.862 /dev/nbd10 00:11:28.862 /dev/nbd11 00:11:28.862 /dev/nbd12 00:11:28.862 /dev/nbd13 00:11:28.862 /dev/nbd14' 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:28.862 /dev/nbd1 00:11:28.862 /dev/nbd10 00:11:28.862 /dev/nbd11 00:11:28.862 /dev/nbd12 00:11:28.862 /dev/nbd13 00:11:28.862 /dev/nbd14' 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:28.862 256+0 records in 00:11:28.862 256+0 records out 00:11:28.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112991 s, 92.8 MB/s 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:28.862 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:29.121 256+0 records in 00:11:29.121 256+0 records out 00:11:29.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.160971 s, 6.5 MB/s 00:11:29.121 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.121 20:39:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:29.121 256+0 records in 00:11:29.121 256+0 records out 00:11:29.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158124 s, 6.6 MB/s 00:11:29.121 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.121 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:29.380 256+0 records in 00:11:29.380 256+0 records out 00:11:29.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156836 s, 6.7 MB/s 00:11:29.380 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.380 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:29.640 256+0 records in 00:11:29.640 256+0 records out 00:11:29.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151612 s, 6.9 MB/s 00:11:29.640 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.640 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:29.640 256+0 records in 00:11:29.640 256+0 records out 00:11:29.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146212 s, 7.2 MB/s 00:11:29.640 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.640 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:29.900 256+0 records in 00:11:29.900 256+0 records out 00:11:29.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152701 s, 6.9 MB/s 00:11:29.900 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.900 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:29.900 256+0 records in 00:11:29.900 256+0 records out 00:11:29.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149697 s, 7.0 MB/s 00:11:29.900 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:29.900 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:29.900 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:29.900 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:29.900 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:29.900 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:29.900 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:29.900 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:29.900 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.174 20:39:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:30.433 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:30.433 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:30.433 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:30.433 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.433 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.433 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:30.433 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.433 20:39:25 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:30.433 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.433 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:30.691 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:30.691 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:30.691 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:30.691 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.691 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.691 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:30.691 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.691 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.691 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.691 20:39:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:31.258 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:31.258 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:31.258 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:31.258 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.258 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.258 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:31.258 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.258 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.258 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.258 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:31.518 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:31.518 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:31.518 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:31.518 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.518 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.518 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:31.518 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.518 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.518 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.518 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:31.777 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:31.777 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:31.777 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:31.777 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.777 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.777 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:31.777 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.777 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.777 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.777 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:32.035 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:32.035 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:32.035 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:32.035 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.035 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.035 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:32.035 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:32.035 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.035 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.035 20:39:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:32.294 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:32.294 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:32.294 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:32.294 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.294 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.294 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:32.294 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:32.294 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.294 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:32.294 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.294 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:32.554 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:32.812 malloc_lvol_verify 00:11:32.812 20:39:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:33.380 c512ef62-e615-4581-bbbd-bbddd2b7fdbf 00:11:33.380 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:33.639 65020b67-e8b9-4021-b0a4-a5156091d0cd 00:11:33.639 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:33.898 /dev/nbd0 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:33.898 mke2fs 1.47.0 (5-Feb-2023) 00:11:33.898 Discarding device blocks: 0/4096 done 00:11:33.898 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:33.898 00:11:33.898 Allocating group tables: 0/1 done 00:11:33.898 Writing inode tables: 0/1 done 00:11:33.898 Creating journal (1024 blocks): done 00:11:33.898 Writing superblocks and filesystem accounting information: 0/1 done 00:11:33.898 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:33.898 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63269 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63269 ']' 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63269 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.156 20:39:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63269 00:11:34.156 20:39:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.156 killing process with pid 63269 00:11:34.156 20:39:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.156 20:39:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63269' 00:11:34.156 20:39:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63269 00:11:34.156 20:39:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63269 00:11:35.533 20:39:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:35.533 00:11:35.533 real 0m15.540s 00:11:35.533 user 0m20.682s 00:11:35.533 sys 0m6.395s 00:11:35.533 20:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.533 20:39:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:35.533 ************************************ 00:11:35.533 END TEST bdev_nbd 00:11:35.533 ************************************ 00:11:35.792 20:39:30 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:11:35.792 20:39:30 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:11:35.792 20:39:30 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:11:35.792 skipping fio tests on NVMe due to multi-ns failures. 00:11:35.792 20:39:30 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:35.792 20:39:30 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:35.792 20:39:30 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:35.792 20:39:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:35.792 20:39:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.792 20:39:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:35.792 ************************************ 00:11:35.792 START TEST bdev_verify 00:11:35.792 ************************************ 00:11:35.792 20:39:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:35.792 [2024-11-26 20:39:30.665455] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:35.792 [2024-11-26 20:39:30.665657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63724 ] 00:11:36.050 [2024-11-26 20:39:30.872444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:36.309 [2024-11-26 20:39:31.050211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.309 [2024-11-26 20:39:31.050229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.907 Running I/O for 5 seconds... 
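While the five-second verify run above is in flight, the run_test line that launched it is worth decoding, since the whole test is a single bdevperf invocation against the generated bdev.json. A hand-run equivalent with the flags spelled out (paths are the harness's own; the gloss on -C is inferred from the per-core job rows in the results that follow, not stated in the log itself):

# -q 128     queue depth per job
# -o 4096    I/O size in bytes
# -w verify  write a pattern, read it back, and compare
# -t 5       run time in seconds
# -m 0x3     core mask: cores 0 and 1
# -C         drive every bdev from every core, which is why each
#            device appears twice in the table below (masks 0x1 and 0x2)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The bdev_verify_big_io run further down repeats the same command with -o 65536, trading small-block IOPS for large sequential transfers.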
00:11:39.221 15808.00 IOPS, 61.75 MiB/s [2024-11-26T20:39:35.151Z] 16000.00 IOPS, 62.50 MiB/s [2024-11-26T20:39:36.527Z] 16981.33 IOPS, 66.33 MiB/s [2024-11-26T20:39:37.095Z] 17088.00 IOPS, 66.75 MiB/s [2024-11-26T20:39:37.095Z] 17228.80 IOPS, 67.30 MiB/s 00:11:42.101 Latency(us) 00:11:42.101 [2024-11-26T20:39:37.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.101 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x0 length 0xbd0bd 00:11:42.101 Nvme0n1 : 5.09 1245.99 4.87 0.00 0.00 102070.17 10236.10 100863.02 00:11:42.101 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:42.101 Nvme0n1 : 5.08 1170.80 4.57 0.00 0.00 108588.58 14917.24 116342.00 00:11:42.101 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x0 length 0x4ff80 00:11:42.101 Nvme1n1p1 : 5.11 1253.50 4.90 0.00 0.00 101664.12 15416.56 98865.74 00:11:42.101 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x4ff80 length 0x4ff80 00:11:42.101 Nvme1n1p1 : 5.10 1178.69 4.60 0.00 0.00 107793.54 16477.62 111348.78 00:11:42.101 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x0 length 0x4ff7f 00:11:42.101 Nvme1n1p2 : 5.11 1253.12 4.90 0.00 0.00 101574.43 15042.07 97367.77 00:11:42.101 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:11:42.101 Nvme1n1p2 : 5.11 1177.99 4.60 0.00 0.00 107498.11 18100.42 104358.28 00:11:42.101 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x0 length 0x80000 00:11:42.101 Nvme2n1 : 5.11 1252.13 4.89 0.00 0.00 101401.91 17476.27 94871.16 00:11:42.101 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x80000 length 0x80000 00:11:42.101 Nvme2n1 : 5.11 1177.38 4.60 0.00 0.00 107247.75 19348.72 104358.28 00:11:42.101 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x0 length 0x80000 00:11:42.101 Nvme2n2 : 5.11 1251.64 4.89 0.00 0.00 101247.13 17601.10 92374.55 00:11:42.101 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x80000 length 0x80000 00:11:42.101 Nvme2n2 : 5.11 1177.11 4.60 0.00 0.00 107047.68 17850.76 107354.21 00:11:42.101 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x0 length 0x80000 00:11:42.101 Nvme2n3 : 5.12 1251.20 4.89 0.00 0.00 101076.09 17975.59 97367.77 00:11:42.101 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x80000 length 0x80000 00:11:42.101 Nvme2n3 : 5.11 1176.70 4.60 0.00 0.00 106932.92 17850.76 108852.18 00:11:42.101 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x0 length 0x20000 00:11:42.101 Nvme3n1 : 5.12 1250.79 4.89 0.00 0.00 100876.00 14792.41 101362.35 00:11:42.101 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:42.101 Verification LBA range: start 0x20000 length 
0x20000 00:11:42.101 Nvme3n1 : 5.11 1176.28 4.59 0.00 0.00 106830.88 17850.76 113845.39 00:11:42.101 [2024-11-26T20:39:37.095Z] =================================================================================================================== 00:11:42.101 [2024-11-26T20:39:37.095Z] Total : 16993.33 66.38 0.00 0.00 104323.48 10236.10 116342.00 00:11:44.003 00:11:44.003 real 0m8.028s 00:11:44.003 user 0m14.685s 00:11:44.003 sys 0m0.362s 00:11:44.003 20:39:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.003 ************************************ 00:11:44.003 END TEST bdev_verify 00:11:44.003 20:39:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:44.003 ************************************ 00:11:44.003 20:39:38 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:44.003 20:39:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:44.003 20:39:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.003 20:39:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:44.003 ************************************ 00:11:44.003 START TEST bdev_verify_big_io 00:11:44.003 ************************************ 00:11:44.003 20:39:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:44.003 [2024-11-26 20:39:38.759529] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:44.003 [2024-11-26 20:39:38.759722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63833 ] 00:11:44.003 [2024-11-26 20:39:38.964344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:44.262 [2024-11-26 20:39:39.136463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.262 [2024-11-26 20:39:39.136465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.196 Running I/O for 5 seconds... 
00:11:51.288 2671.00 IOPS, 166.94 MiB/s [2024-11-26T20:39:46.282Z] 4002.50 IOPS, 250.16 MiB/s 00:11:51.288 Latency(us) 00:11:51.288 [2024-11-26T20:39:46.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:51.288 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x0 length 0xbd0b 00:11:51.288 Nvme0n1 : 5.73 129.62 8.10 0.00 0.00 954683.66 25964.74 902774.00 00:11:51.288 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:51.288 Nvme0n1 : 5.81 118.48 7.40 0.00 0.00 1046710.07 28835.84 1238318.32 00:11:51.288 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x0 length 0x4ff8 00:11:51.288 Nvme1n1p1 : 5.66 128.49 8.03 0.00 0.00 949058.12 65411.17 1014622.11 00:11:51.288 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x4ff8 length 0x4ff8 00:11:51.288 Nvme1n1p1 : 5.87 118.91 7.43 0.00 0.00 1015858.25 34702.87 1485981.99 00:11:51.288 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x0 length 0x4ff7 00:11:51.288 Nvme1n1p2 : 5.86 84.61 5.29 0.00 0.00 1410520.46 118838.61 1933374.42 00:11:51.288 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x4ff7 length 0x4ff7 00:11:51.288 Nvme1n1p2 : 5.81 124.72 7.80 0.00 0.00 944042.79 43441.01 822882.50 00:11:51.288 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x0 length 0x8000 00:11:51.288 Nvme2n1 : 5.81 128.59 8.04 0.00 0.00 902241.26 84884.72 1070546.16 00:11:51.288 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x8000 length 0x8000 00:11:51.288 Nvme2n1 : 5.87 117.69 7.36 0.00 0.00 971032.97 78393.54 1549895.19 00:11:51.288 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x0 length 0x8000 00:11:51.288 Nvme2n2 : 5.81 133.03 8.31 0.00 0.00 850367.17 73899.64 1070546.16 00:11:51.288 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x8000 length 0x8000 00:11:51.288 Nvme2n2 : 5.87 121.55 7.60 0.00 0.00 926221.50 55175.07 1565873.49 00:11:51.288 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x0 length 0x8000 00:11:51.288 Nvme2n3 : 5.84 142.51 8.91 0.00 0.00 786189.39 23343.30 962692.63 00:11:51.288 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x8000 length 0x8000 00:11:51.288 Nvme2n3 : 5.90 134.29 8.39 0.00 0.00 825027.13 15915.89 1414079.63 00:11:51.288 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x0 length 0x2000 00:11:51.288 Nvme3n1 : 5.86 152.90 9.56 0.00 0.00 719323.50 8862.96 982665.51 00:11:51.288 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:51.288 Verification LBA range: start 0x2000 length 0x2000 00:11:51.288 Nvme3n1 : 5.91 133.42 8.34 0.00 0.00 812110.25 12483.05 1613808.40 00:11:51.288 [2024-11-26T20:39:46.282Z] 
=================================================================================================================== 00:11:51.288 [2024-11-26T20:39:46.282Z] Total : 1768.83 110.55 0.00 0.00 918337.38 8862.96 1933374.42 00:11:53.837 00:11:53.837 real 0m9.652s 00:11:53.837 user 0m17.884s 00:11:53.837 sys 0m0.398s 00:11:53.837 20:39:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.837 ************************************ 00:11:53.837 20:39:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.837 END TEST bdev_verify_big_io 00:11:53.837 ************************************ 00:11:53.837 20:39:48 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:53.837 20:39:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:53.837 20:39:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.837 20:39:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:53.837 ************************************ 00:11:53.837 START TEST bdev_write_zeroes 00:11:53.837 ************************************ 00:11:53.837 20:39:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:53.837 [2024-11-26 20:39:48.439146] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:53.837 [2024-11-26 20:39:48.439301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63959 ] 00:11:53.837 [2024-11-26 20:39:48.615142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.837 [2024-11-26 20:39:48.744225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.772 Running I/O for 1 seconds... 
00:11:55.706 49664.00 IOPS, 194.00 MiB/s 00:11:55.706 Latency(us) 00:11:55.706 [2024-11-26T20:39:50.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.706 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:55.706 Nvme0n1 : 1.03 7054.76 27.56 0.00 0.00 18095.09 14355.50 33704.23 00:11:55.706 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:55.706 Nvme1n1p1 : 1.04 7043.40 27.51 0.00 0.00 18094.40 14355.50 33204.91 00:11:55.706 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:55.706 Nvme1n1p2 : 1.04 7032.13 27.47 0.00 0.00 18043.16 14355.50 32206.26 00:11:55.706 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:55.706 Nvme2n1 : 1.04 7021.70 27.43 0.00 0.00 17952.73 12358.22 31207.62 00:11:55.706 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:55.706 Nvme2n2 : 1.04 7011.45 27.39 0.00 0.00 17916.43 10485.76 30458.64 00:11:55.706 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:55.706 Nvme2n3 : 1.04 7001.05 27.35 0.00 0.00 17879.90 8426.06 31706.94 00:11:55.706 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:55.706 Nvme3n1 : 1.04 6929.49 27.07 0.00 0.00 18017.10 13793.77 33704.23 00:11:55.706 [2024-11-26T20:39:50.700Z] =================================================================================================================== 00:11:55.706 [2024-11-26T20:39:50.700Z] Total : 49093.98 191.77 0.00 0.00 17999.81 8426.06 33704.23 00:11:57.079 00:11:57.079 real 0m3.567s 00:11:57.079 user 0m3.181s 00:11:57.079 sys 0m0.267s 00:11:57.079 20:39:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.079 20:39:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:57.079 ************************************ 00:11:57.079 END TEST bdev_write_zeroes 00:11:57.079 ************************************ 00:11:57.079 20:39:51 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:57.079 20:39:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:57.079 20:39:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.079 20:39:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:57.079 ************************************ 00:11:57.079 START TEST bdev_json_nonenclosed 00:11:57.079 ************************************ 00:11:57.079 20:39:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:57.337 [2024-11-26 20:39:52.096652] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:11:57.338 [2024-11-26 20:39:52.096836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64012 ] 00:11:57.338 [2024-11-26 20:39:52.296912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.596 [2024-11-26 20:39:52.502028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.596 [2024-11-26 20:39:52.502186] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:57.596 [2024-11-26 20:39:52.502242] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:57.596 [2024-11-26 20:39:52.502272] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:57.854 00:11:57.854 real 0m0.832s 00:11:57.854 user 0m0.544s 00:11:57.854 sys 0m0.180s 00:11:57.854 20:39:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.854 20:39:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:57.854 ************************************ 00:11:57.854 END TEST bdev_json_nonenclosed 00:11:57.854 ************************************ 00:11:58.112 20:39:52 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:58.112 20:39:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:58.112 20:39:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.112 20:39:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:58.112 ************************************ 00:11:58.112 START TEST bdev_json_nonarray 00:11:58.112 ************************************ 00:11:58.112 20:39:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:58.112 [2024-11-26 20:39:52.997508] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:11:58.112 [2024-11-26 20:39:52.997746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64043 ] 00:11:58.370 [2024-11-26 20:39:53.188516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.370 [2024-11-26 20:39:53.315732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.370 [2024-11-26 20:39:53.315850] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:11:58.370 [2024-11-26 20:39:53.315876] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:58.370 [2024-11-26 20:39:53.315889] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:58.627 00:11:58.627 real 0m0.753s 00:11:58.627 user 0m0.477s 00:11:58.627 sys 0m0.168s 00:11:58.627 20:39:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.627 20:39:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:58.627 ************************************ 00:11:58.627 END TEST bdev_json_nonarray 00:11:58.627 ************************************ 00:11:58.884 20:39:53 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:11:58.884 20:39:53 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:11:58.884 20:39:53 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:58.884 20:39:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.884 20:39:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.884 20:39:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:58.884 ************************************ 00:11:58.884 START TEST bdev_gpt_uuid 00:11:58.884 ************************************ 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64073 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64073 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 64073 ']' 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.884 20:39:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:58.884 [2024-11-26 20:39:53.833961] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
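The spdk_tgt coming up above hosts the GPT checks that follow. Stripped of the xtrace noise, the assertions reduce to: fetch exactly one bdev by its partition GUID over the RPC socket, then confirm that the returned object's alias and its driver_specific.gpt.unique_partition_guid both echo that GUID back. A hand-run equivalent of the first-partition check, using the GUID and socket path shown in the trace:

sock=/var/tmp/spdk.sock
guid=6f89f330-603b-4116-ac73-2ca8eae53030
bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
    bdev_get_bdevs -b "$guid")
# Exactly one bdev must match the GUID lookup.
[ "$(jq -r 'length' <<< "$bdev")" = 1 ]
# Its alias and its GPT unique partition GUID must both equal the GUID.
[ "$(jq -r '.[0].aliases[0]' <<< "$bdev")" = "$guid" ]
[ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev")" = "$guid" ]

The second partition gets the same treatment under its own GUID (abf1734f-66e5-4c0f-aa29-4021d4d307df), and the test ends by killing the target and waiting for pid 64073 to exit.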
00:11:58.884 [2024-11-26 20:39:53.834112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64073 ] 00:11:59.141 [2024-11-26 20:39:54.018304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.399 [2024-11-26 20:39:54.160704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.334 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.334 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:12:00.334 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:00.334 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.334 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:00.593 Some configs were skipped because the RPC state that can call them passed over. 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:12:00.593 { 00:12:00.593 "name": "Nvme1n1p1", 00:12:00.593 "aliases": [ 00:12:00.593 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:12:00.593 ], 00:12:00.593 "product_name": "GPT Disk", 00:12:00.593 "block_size": 4096, 00:12:00.593 "num_blocks": 655104, 00:12:00.593 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:00.593 "assigned_rate_limits": { 00:12:00.593 "rw_ios_per_sec": 0, 00:12:00.593 "rw_mbytes_per_sec": 0, 00:12:00.593 "r_mbytes_per_sec": 0, 00:12:00.593 "w_mbytes_per_sec": 0 00:12:00.593 }, 00:12:00.593 "claimed": false, 00:12:00.593 "zoned": false, 00:12:00.593 "supported_io_types": { 00:12:00.593 "read": true, 00:12:00.593 "write": true, 00:12:00.593 "unmap": true, 00:12:00.593 "flush": true, 00:12:00.593 "reset": true, 00:12:00.593 "nvme_admin": false, 00:12:00.593 "nvme_io": false, 00:12:00.593 "nvme_io_md": false, 00:12:00.593 "write_zeroes": true, 00:12:00.593 "zcopy": false, 00:12:00.593 "get_zone_info": false, 00:12:00.593 "zone_management": false, 00:12:00.593 "zone_append": false, 00:12:00.593 "compare": true, 00:12:00.593 "compare_and_write": false, 00:12:00.593 "abort": true, 00:12:00.593 "seek_hole": false, 00:12:00.593 "seek_data": false, 00:12:00.593 "copy": true, 00:12:00.593 "nvme_iov_md": false 00:12:00.593 }, 00:12:00.593 "driver_specific": { 
00:12:00.593 "gpt": { 00:12:00.593 "base_bdev": "Nvme1n1", 00:12:00.593 "offset_blocks": 256, 00:12:00.593 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:12:00.593 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:00.593 "partition_name": "SPDK_TEST_first" 00:12:00.593 } 00:12:00.593 } 00:12:00.593 } 00:12:00.593 ]' 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:12:00.593 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:12:00.852 { 00:12:00.852 "name": "Nvme1n1p2", 00:12:00.852 "aliases": [ 00:12:00.852 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:12:00.852 ], 00:12:00.852 "product_name": "GPT Disk", 00:12:00.852 "block_size": 4096, 00:12:00.852 "num_blocks": 655103, 00:12:00.852 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:00.852 "assigned_rate_limits": { 00:12:00.852 "rw_ios_per_sec": 0, 00:12:00.852 "rw_mbytes_per_sec": 0, 00:12:00.852 "r_mbytes_per_sec": 0, 00:12:00.852 "w_mbytes_per_sec": 0 00:12:00.852 }, 00:12:00.852 "claimed": false, 00:12:00.852 "zoned": false, 00:12:00.852 "supported_io_types": { 00:12:00.852 "read": true, 00:12:00.852 "write": true, 00:12:00.852 "unmap": true, 00:12:00.852 "flush": true, 00:12:00.852 "reset": true, 00:12:00.852 "nvme_admin": false, 00:12:00.852 "nvme_io": false, 00:12:00.852 "nvme_io_md": false, 00:12:00.852 "write_zeroes": true, 00:12:00.852 "zcopy": false, 00:12:00.852 "get_zone_info": false, 00:12:00.852 "zone_management": false, 00:12:00.852 "zone_append": false, 00:12:00.852 "compare": true, 00:12:00.852 "compare_and_write": false, 00:12:00.852 "abort": true, 00:12:00.852 "seek_hole": false, 00:12:00.852 "seek_data": false, 00:12:00.852 "copy": true, 00:12:00.852 "nvme_iov_md": false 00:12:00.852 }, 00:12:00.852 "driver_specific": { 00:12:00.852 "gpt": { 00:12:00.852 "base_bdev": "Nvme1n1", 00:12:00.852 "offset_blocks": 655360, 00:12:00.852 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:12:00.852 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:00.852 "partition_name": "SPDK_TEST_second" 00:12:00.852 } 00:12:00.852 } 00:12:00.852 } 00:12:00.852 ]' 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 64073 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 64073 ']' 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 64073 00:12:00.852 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:12:01.112 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.112 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64073 00:12:01.112 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.112 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.112 killing process with pid 64073 00:12:01.112 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64073' 00:12:01.112 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 64073 00:12:01.112 20:39:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 64073 00:12:03.643 00:12:03.643 real 0m4.849s 00:12:03.643 user 0m5.099s 00:12:03.643 sys 0m0.588s 00:12:03.643 20:39:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.643 20:39:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:03.643 ************************************ 00:12:03.643 END TEST bdev_gpt_uuid 00:12:03.643 ************************************ 00:12:03.643 20:39:58 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:12:03.643 20:39:58 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:12:03.643 20:39:58 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:12:03.643 20:39:58 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:03.643 20:39:58 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:03.643 20:39:58 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:12:03.643 20:39:58 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:12:03.643 20:39:58 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:12:03.643 20:39:58 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:04.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:04.467 Waiting for block devices as requested 00:12:04.467 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.467 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:12:04.725 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.725 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:10.027 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:10.027 20:40:04 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:12:10.027 20:40:04 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:12:10.284 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:10.284 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:10.284 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:10.284 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:10.284 20:40:05 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:12:10.284 00:12:10.284 real 1m11.613s 00:12:10.284 user 1m30.466s 00:12:10.285 sys 0m13.438s 00:12:10.285 20:40:05 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.285 20:40:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:10.285 ************************************ 00:12:10.285 END TEST blockdev_nvme_gpt 00:12:10.285 ************************************ 00:12:10.285 20:40:05 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:10.285 20:40:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.285 20:40:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.285 20:40:05 -- common/autotest_common.sh@10 -- # set +x 00:12:10.285 ************************************ 00:12:10.285 START TEST nvme 00:12:10.285 ************************************ 00:12:10.285 20:40:05 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:10.285 * Looking for test storage... 00:12:10.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:10.285 20:40:05 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:10.285 20:40:05 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:10.285 20:40:05 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:10.285 20:40:05 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:10.285 20:40:05 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:10.285 20:40:05 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:10.285 20:40:05 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:10.285 20:40:05 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:10.285 20:40:05 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:10.285 20:40:05 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:10.285 20:40:05 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:10.285 20:40:05 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:10.285 20:40:05 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:10.285 20:40:05 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:10.285 20:40:05 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:10.285 20:40:05 nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:10.285 20:40:05 nvme -- scripts/common.sh@345 -- # : 1 00:12:10.285 20:40:05 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:10.285 20:40:05 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:10.285 20:40:05 nvme -- scripts/common.sh@365 -- # decimal 1 00:12:10.285 20:40:05 nvme -- scripts/common.sh@353 -- # local d=1 00:12:10.285 20:40:05 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:10.285 20:40:05 nvme -- scripts/common.sh@355 -- # echo 1 00:12:10.285 20:40:05 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:10.285 20:40:05 nvme -- scripts/common.sh@366 -- # decimal 2 00:12:10.285 20:40:05 nvme -- scripts/common.sh@353 -- # local d=2 00:12:10.285 20:40:05 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:10.285 20:40:05 nvme -- scripts/common.sh@355 -- # echo 2 00:12:10.544 20:40:05 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:10.544 20:40:05 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:10.544 20:40:05 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:10.544 20:40:05 nvme -- scripts/common.sh@368 -- # return 0 00:12:10.544 20:40:05 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:10.544 20:40:05 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.544 --rc genhtml_branch_coverage=1 00:12:10.544 --rc genhtml_function_coverage=1 00:12:10.544 --rc genhtml_legend=1 00:12:10.544 --rc geninfo_all_blocks=1 00:12:10.544 --rc geninfo_unexecuted_blocks=1 00:12:10.544 00:12:10.544 ' 00:12:10.544 20:40:05 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.544 --rc genhtml_branch_coverage=1 00:12:10.544 --rc genhtml_function_coverage=1 00:12:10.544 --rc genhtml_legend=1 00:12:10.544 --rc geninfo_all_blocks=1 00:12:10.544 --rc geninfo_unexecuted_blocks=1 00:12:10.544 00:12:10.544 ' 00:12:10.544 20:40:05 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.544 --rc genhtml_branch_coverage=1 00:12:10.544 --rc genhtml_function_coverage=1 00:12:10.544 --rc genhtml_legend=1 00:12:10.544 --rc geninfo_all_blocks=1 00:12:10.544 --rc geninfo_unexecuted_blocks=1 00:12:10.544 00:12:10.544 ' 00:12:10.544 20:40:05 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:10.544 --rc genhtml_branch_coverage=1 00:12:10.544 --rc genhtml_function_coverage=1 00:12:10.544 --rc genhtml_legend=1 00:12:10.544 --rc geninfo_all_blocks=1 00:12:10.544 --rc geninfo_unexecuted_blocks=1 00:12:10.544 00:12:10.544 ' 00:12:10.544 20:40:05 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:11.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:11.677 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.677 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.677 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.935 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.935 20:40:06 nvme -- nvme/nvme.sh@79 -- # uname 00:12:11.935 20:40:06 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:12:11.935 20:40:06 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:12:11.935 20:40:06 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:12:11.935 20:40:06 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:12:11.935 20:40:06 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:12:11.935 20:40:06 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:12:11.935 20:40:06 nvme -- common/autotest_common.sh@1075 -- # stubpid=64736 00:12:11.935 20:40:06 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:12:11.935 20:40:06 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:12:11.935 Waiting for stub to ready for secondary processes... 00:12:11.935 20:40:06 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:11.935 20:40:06 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64736 ]] 00:12:11.935 20:40:06 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:11.935 [2024-11-26 20:40:06.897997] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:12:11.935 [2024-11-26 20:40:06.898231] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:12:12.870 20:40:07 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:12.870 20:40:07 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64736 ]] 00:12:12.870 20:40:07 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:13.805 [2024-11-26 20:40:08.753157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:14.063 20:40:08 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:14.063 20:40:08 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64736 ]] 00:12:14.063 20:40:08 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:14.063 [2024-11-26 20:40:08.879103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.063 [2024-11-26 20:40:08.879313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.063 [2024-11-26 20:40:08.879391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.063 [2024-11-26 20:40:08.899403] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:12:14.063 [2024-11-26 20:40:08.899449] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:14.063 [2024-11-26 20:40:08.908905] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:12:14.063 [2024-11-26 20:40:08.909017] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:12:14.063 [2024-11-26 20:40:08.912075] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:14.063 [2024-11-26 20:40:08.912269] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:12:14.063 [2024-11-26 20:40:08.912351] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:12:14.063 [2024-11-26 20:40:08.915057] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:14.063 [2024-11-26 20:40:08.915238] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:12:14.063 [2024-11-26 20:40:08.915318] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:12:14.063 [2024-11-26 20:40:08.918346] 
nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:14.063 [2024-11-26 20:40:08.918534] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:12:14.063 [2024-11-26 20:40:08.918608] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:12:14.063 [2024-11-26 20:40:08.918690] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:12:14.063 [2024-11-26 20:40:08.918739] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:12:14.997 20:40:09 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:14.997 done. 00:12:14.997 20:40:09 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:12:14.997 20:40:09 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:14.997 20:40:09 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:12:14.997 20:40:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.997 20:40:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:14.997 ************************************ 00:12:14.997 START TEST nvme_reset 00:12:14.997 ************************************ 00:12:14.997 20:40:09 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:15.254 Initializing NVMe Controllers 00:12:15.254 Skipping QEMU NVMe SSD at 0000:00:10.0 00:12:15.254 Skipping QEMU NVMe SSD at 0000:00:11.0 00:12:15.254 Skipping QEMU NVMe SSD at 0000:00:13.0 00:12:15.254 Skipping QEMU NVMe SSD at 0000:00:12.0 00:12:15.254 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:12:15.254 00:12:15.254 real 0m0.393s 00:12:15.254 user 0m0.150s 00:12:15.254 sys 0m0.187s 00:12:15.254 20:40:10 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.254 20:40:10 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:12:15.254 ************************************ 00:12:15.254 END TEST nvme_reset 00:12:15.254 ************************************ 00:12:15.512 20:40:10 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:12:15.512 20:40:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:15.512 20:40:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.512 20:40:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.512 ************************************ 00:12:15.512 START TEST nvme_identify 00:12:15.512 ************************************ 00:12:15.512 20:40:10 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:12:15.512 20:40:10 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:12:15.512 20:40:10 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:12:15.512 20:40:10 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:12:15.512 20:40:10 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:12:15.512 20:40:10 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:15.512 20:40:10 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:12:15.512 20:40:10 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:15.512 20:40:10 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:15.512 20:40:10 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:15.512 20:40:10 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:15.512 20:40:10 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:15.512 20:40:10 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:12:15.771 [2024-11-26 20:40:10.684859] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64777 terminated unexpected 00:12:15.771 ===================================================== 00:12:15.771 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:15.771 ===================================================== 00:12:15.771 Controller Capabilities/Features 00:12:15.771 ================================ 00:12:15.771 Vendor ID: 1b36 00:12:15.771 Subsystem Vendor ID: 1af4 00:12:15.771 Serial Number: 12340 00:12:15.771 Model Number: QEMU NVMe Ctrl 00:12:15.771 Firmware Version: 8.0.0 00:12:15.771 Recommended Arb Burst: 6 00:12:15.771 IEEE OUI Identifier: 00 54 52 00:12:15.771 Multi-path I/O 00:12:15.771 May have multiple subsystem ports: No 00:12:15.771 May have multiple controllers: No 00:12:15.771 Associated with SR-IOV VF: No 00:12:15.771 Max Data Transfer Size: 524288 00:12:15.771 Max Number of Namespaces: 256 00:12:15.771 Max Number of I/O Queues: 64 00:12:15.771 NVMe Specification Version (VS): 1.4 00:12:15.771 NVMe Specification Version (Identify): 1.4 00:12:15.771 Maximum Queue Entries: 2048 00:12:15.771 Contiguous Queues Required: Yes 00:12:15.771 Arbitration Mechanisms Supported 00:12:15.771 Weighted Round Robin: Not Supported 00:12:15.771 Vendor Specific: Not Supported 00:12:15.771 Reset Timeout: 7500 ms 00:12:15.771 Doorbell Stride: 4 bytes 00:12:15.771 NVM Subsystem Reset: Not Supported 00:12:15.771 Command Sets Supported 00:12:15.771 NVM Command Set: Supported 00:12:15.771 Boot Partition: Not Supported 00:12:15.771 Memory Page Size Minimum: 4096 bytes 00:12:15.771 Memory Page Size Maximum: 65536 bytes 00:12:15.771 Persistent Memory Region: Not Supported 00:12:15.771 Optional Asynchronous Events Supported 00:12:15.771 Namespace Attribute Notices: Supported 00:12:15.771 Firmware Activation Notices: Not Supported 00:12:15.771 ANA Change Notices: Not Supported 00:12:15.771 PLE Aggregate Log Change Notices: Not Supported 00:12:15.771 LBA Status Info Alert Notices: Not Supported 00:12:15.771 EGE Aggregate Log Change Notices: Not Supported 00:12:15.771 Normal NVM Subsystem Shutdown event: Not Supported 00:12:15.771 Zone Descriptor Change Notices: Not Supported 00:12:15.771 Discovery Log Change Notices: Not Supported 00:12:15.771 Controller Attributes 00:12:15.771 128-bit Host Identifier: Not Supported 00:12:15.771 Non-Operational Permissive Mode: Not Supported 00:12:15.771 NVM Sets: Not Supported 00:12:15.771 Read Recovery Levels: Not Supported 00:12:15.771 Endurance Groups: Not Supported 00:12:15.771 Predictable Latency Mode: Not Supported 00:12:15.771 Traffic Based Keep ALive: Not Supported 00:12:15.771 Namespace Granularity: Not Supported 00:12:15.771 SQ Associations: Not Supported 00:12:15.771 UUID List: Not Supported 00:12:15.771 Multi-Domain Subsystem: Not Supported 00:12:15.771 Fixed Capacity Management: Not Supported 00:12:15.771 Variable Capacity Management: Not Supported 00:12:15.771 Delete Endurance Group: Not Supported 
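The BDF enumeration that feeds this identify pass — get_nvme_bdfs at common/autotest_common.sh@1499 above — boils down to a single gen_nvme.sh | jq pipeline. A minimal standalone sketch follows; the repo path, the jq filter, and the empty-list guard are taken from the trace above, while the surrounding glue is assumed:

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk   # checkout path used in this run
    # gen_nvme.sh emits a JSON bdev config in which each attached NVMe
    # controller contributes its PCIe address (BDF) under .params.traddr;
    # jq -r strips the JSON quoting so the array holds bare BDFs.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && exit 1       # same bail-out as the (( 4 == 0 )) check above
    printf '%s\n' "${bdfs[@]}"             # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0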
00:12:15.771 Delete NVM Set: Not Supported 00:12:15.771 Extended LBA Formats Supported: Supported 00:12:15.771 Flexible Data Placement Supported: Not Supported 00:12:15.771 00:12:15.771 Controller Memory Buffer Support 00:12:15.771 ================================ 00:12:15.771 Supported: No 00:12:15.771 00:12:15.771 Persistent Memory Region Support 00:12:15.771 ================================ 00:12:15.771 Supported: No 00:12:15.771 00:12:15.771 Admin Command Set Attributes 00:12:15.771 ============================ 00:12:15.771 Security Send/Receive: Not Supported 00:12:15.771 Format NVM: Supported 00:12:15.771 Firmware Activate/Download: Not Supported 00:12:15.771 Namespace Management: Supported 00:12:15.771 Device Self-Test: Not Supported 00:12:15.771 Directives: Supported 00:12:15.771 NVMe-MI: Not Supported 00:12:15.771 Virtualization Management: Not Supported 00:12:15.771 Doorbell Buffer Config: Supported 00:12:15.771 Get LBA Status Capability: Not Supported 00:12:15.771 Command & Feature Lockdown Capability: Not Supported 00:12:15.771 Abort Command Limit: 4 00:12:15.771 Async Event Request Limit: 4 00:12:15.771 Number of Firmware Slots: N/A 00:12:15.771 Firmware Slot 1 Read-Only: N/A 00:12:15.771 Firmware Activation Without Reset: N/A 00:12:15.771 Multiple Update Detection Support: N/A 00:12:15.771 Firmware Update Granularity: No Information Provided 00:12:15.771 Per-Namespace SMART Log: Yes 00:12:15.771 Asymmetric Namespace Access Log Page: Not Supported 00:12:15.771 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:15.771 Command Effects Log Page: Supported 00:12:15.771 Get Log Page Extended Data: Supported 00:12:15.771 Telemetry Log Pages: Not Supported 00:12:15.771 Persistent Event Log Pages: Not Supported 00:12:15.771 Supported Log Pages Log Page: May Support 00:12:15.771 Commands Supported & Effects Log Page: Not Supported 00:12:15.771 Feature Identifiers & Effects Log Page:May Support 00:12:15.771 NVMe-MI Commands & Effects Log Page: May Support 00:12:15.771 Data Area 4 for Telemetry Log: Not Supported 00:12:15.771 Error Log Page Entries Supported: 1 00:12:15.771 Keep Alive: Not Supported 00:12:15.771 00:12:15.771 NVM Command Set Attributes 00:12:15.771 ========================== 00:12:15.771 Submission Queue Entry Size 00:12:15.771 Max: 64 00:12:15.771 Min: 64 00:12:15.771 Completion Queue Entry Size 00:12:15.771 Max: 16 00:12:15.771 Min: 16 00:12:15.771 Number of Namespaces: 256 00:12:15.771 Compare Command: Supported 00:12:15.771 Write Uncorrectable Command: Not Supported 00:12:15.771 Dataset Management Command: Supported 00:12:15.771 Write Zeroes Command: Supported 00:12:15.771 Set Features Save Field: Supported 00:12:15.771 Reservations: Not Supported 00:12:15.771 Timestamp: Supported 00:12:15.771 Copy: Supported 00:12:15.771 Volatile Write Cache: Present 00:12:15.771 Atomic Write Unit (Normal): 1 00:12:15.771 Atomic Write Unit (PFail): 1 00:12:15.771 Atomic Compare & Write Unit: 1 00:12:15.771 Fused Compare & Write: Not Supported 00:12:15.772 Scatter-Gather List 00:12:15.772 SGL Command Set: Supported 00:12:15.772 SGL Keyed: Not Supported 00:12:15.772 SGL Bit Bucket Descriptor: Not Supported 00:12:15.772 SGL Metadata Pointer: Not Supported 00:12:15.772 Oversized SGL: Not Supported 00:12:15.772 SGL Metadata Address: Not Supported 00:12:15.772 SGL Offset: Not Supported 00:12:15.772 Transport SGL Data Block: Not Supported 00:12:15.772 Replay Protected Memory Block: Not Supported 00:12:15.772 00:12:15.772 Firmware Slot Information 00:12:15.772 ========================= 
00:12:15.772 Active slot: 1 00:12:15.772 Slot 1 Firmware Revision: 1.0 00:12:15.772 00:12:15.772 00:12:15.772 Commands Supported and Effects 00:12:15.772 ============================== 00:12:15.772 Admin Commands 00:12:15.772 -------------- 00:12:15.772 Delete I/O Submission Queue (00h): Supported 00:12:15.772 Create I/O Submission Queue (01h): Supported 00:12:15.772 Get Log Page (02h): Supported 00:12:15.772 Delete I/O Completion Queue (04h): Supported 00:12:15.772 Create I/O Completion Queue (05h): Supported 00:12:15.772 Identify (06h): Supported 00:12:15.772 Abort (08h): Supported 00:12:15.772 Set Features (09h): Supported 00:12:15.772 Get Features (0Ah): Supported 00:12:15.772 Asynchronous Event Request (0Ch): Supported 00:12:15.772 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:15.772 Directive Send (19h): Supported 00:12:15.772 Directive Receive (1Ah): Supported 00:12:15.772 Virtualization Management (1Ch): Supported 00:12:15.772 Doorbell Buffer Config (7Ch): Supported 00:12:15.772 Format NVM (80h): Supported LBA-Change 00:12:15.772 I/O Commands 00:12:15.772 ------------ 00:12:15.772 Flush (00h): Supported LBA-Change 00:12:15.772 Write (01h): Supported LBA-Change 00:12:15.772 Read (02h): Supported 00:12:15.772 Compare (05h): Supported 00:12:15.772 Write Zeroes (08h): Supported LBA-Change 00:12:15.772 Dataset Management (09h): Supported LBA-Change 00:12:15.772 Unknown (0Ch): Supported 00:12:15.772 Unknown (12h): Supported 00:12:15.772 Copy (19h): Supported LBA-Change 00:12:15.772 Unknown (1Dh): Supported LBA-Change 00:12:15.772 00:12:15.772 Error Log 00:12:15.772 ========= 00:12:15.772 00:12:15.772 Arbitration 00:12:15.772 =========== 00:12:15.772 Arbitration Burst: no limit 00:12:15.772 00:12:15.772 Power Management 00:12:15.772 ================ 00:12:15.772 Number of Power States: 1 00:12:15.772 Current Power State: Power State #0 00:12:15.772 Power State #0: 00:12:15.772 Max Power: 25.00 W 00:12:15.772 Non-Operational State: Operational 00:12:15.772 Entry Latency: 16 microseconds 00:12:15.772 Exit Latency: 4 microseconds 00:12:15.772 Relative Read Throughput: 0 00:12:15.772 Relative Read Latency: 0 00:12:15.772 Relative Write Throughput: 0 00:12:15.772 Relative Write Latency: 0 00:12:15.772 Idle Power[2024-11-26 20:40:10.686564] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64777 terminated unexpected 00:12:15.772 : Not Reported 00:12:15.772 Active Power: Not Reported 00:12:15.772 Non-Operational Permissive Mode: Not Supported 00:12:15.772 00:12:15.772 Health Information 00:12:15.772 ================== 00:12:15.772 Critical Warnings: 00:12:15.772 Available Spare Space: OK 00:12:15.772 Temperature: OK 00:12:15.772 Device Reliability: OK 00:12:15.772 Read Only: No 00:12:15.772 Volatile Memory Backup: OK 00:12:15.772 Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.772 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:15.772 Available Spare: 0% 00:12:15.772 Available Spare Threshold: 0% 00:12:15.772 Life Percentage Used: 0% 00:12:15.772 Data Units Read: 601 00:12:15.772 Data Units Written: 529 00:12:15.772 Host Read Commands: 29792 00:12:15.772 Host Write Commands: 29578 00:12:15.772 Controller Busy Time: 0 minutes 00:12:15.772 Power Cycles: 0 00:12:15.772 Power On Hours: 0 hours 00:12:15.772 Unsafe Shutdowns: 0 00:12:15.772 Unrecoverable Media Errors: 0 00:12:15.772 Lifetime Error Log Entries: 0 00:12:15.772 Warning Temperature Time: 0 minutes 00:12:15.772 Critical Temperature Time: 0 minutes 00:12:15.772 00:12:15.772 
Number of Queues 00:12:15.772 ================ 00:12:15.772 Number of I/O Submission Queues: 64 00:12:15.772 Number of I/O Completion Queues: 64 00:12:15.772 00:12:15.772 ZNS Specific Controller Data 00:12:15.772 ============================ 00:12:15.772 Zone Append Size Limit: 0 00:12:15.772 00:12:15.772 00:12:15.772 Active Namespaces 00:12:15.772 ================= 00:12:15.772 Namespace ID:1 00:12:15.772 Error Recovery Timeout: Unlimited 00:12:15.772 Command Set Identifier: NVM (00h) 00:12:15.772 Deallocate: Supported 00:12:15.772 Deallocated/Unwritten Error: Supported 00:12:15.772 Deallocated Read Value: All 0x00 00:12:15.772 Deallocate in Write Zeroes: Not Supported 00:12:15.772 Deallocated Guard Field: 0xFFFF 00:12:15.772 Flush: Supported 00:12:15.772 Reservation: Not Supported 00:12:15.772 Metadata Transferred as: Separate Metadata Buffer 00:12:15.772 Namespace Sharing Capabilities: Private 00:12:15.772 Size (in LBAs): 1548666 (5GiB) 00:12:15.772 Capacity (in LBAs): 1548666 (5GiB) 00:12:15.772 Utilization (in LBAs): 1548666 (5GiB) 00:12:15.772 Thin Provisioning: Not Supported 00:12:15.772 Per-NS Atomic Units: No 00:12:15.772 Maximum Single Source Range Length: 128 00:12:15.772 Maximum Copy Length: 128 00:12:15.772 Maximum Source Range Count: 128 00:12:15.772 NGUID/EUI64 Never Reused: No 00:12:15.772 Namespace Write Protected: No 00:12:15.772 Number of LBA Formats: 8 00:12:15.772 Current LBA Format: LBA Format #07 00:12:15.772 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.772 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.772 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.772 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.772 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.772 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.772 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.772 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.772 00:12:15.772 NVM Specific Namespace Data 00:12:15.772 =========================== 00:12:15.772 Logical Block Storage Tag Mask: 0 00:12:15.772 Protection Information Capabilities: 00:12:15.772 16b Guard Protection Information Storage Tag Support: No 00:12:15.772 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.772 Storage Tag Check Read Support: No 00:12:15.772 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.772 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.772 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.772 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.772 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.772 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.772 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.772 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.772 ===================================================== 00:12:15.772 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:15.772 ===================================================== 00:12:15.772 Controller Capabilities/Features 00:12:15.772 ================================ 00:12:15.772 Vendor ID: 1b36 00:12:15.772 Subsystem Vendor ID: 1af4 00:12:15.772 Serial Number: 
12341 00:12:15.772 Model Number: QEMU NVMe Ctrl 00:12:15.772 Firmware Version: 8.0.0 00:12:15.772 Recommended Arb Burst: 6 00:12:15.772 IEEE OUI Identifier: 00 54 52 00:12:15.772 Multi-path I/O 00:12:15.772 May have multiple subsystem ports: No 00:12:15.772 May have multiple controllers: No 00:12:15.772 Associated with SR-IOV VF: No 00:12:15.772 Max Data Transfer Size: 524288 00:12:15.772 Max Number of Namespaces: 256 00:12:15.772 Max Number of I/O Queues: 64 00:12:15.772 NVMe Specification Version (VS): 1.4 00:12:15.772 NVMe Specification Version (Identify): 1.4 00:12:15.772 Maximum Queue Entries: 2048 00:12:15.772 Contiguous Queues Required: Yes 00:12:15.772 Arbitration Mechanisms Supported 00:12:15.772 Weighted Round Robin: Not Supported 00:12:15.772 Vendor Specific: Not Supported 00:12:15.772 Reset Timeout: 7500 ms 00:12:15.772 Doorbell Stride: 4 bytes 00:12:15.772 NVM Subsystem Reset: Not Supported 00:12:15.772 Command Sets Supported 00:12:15.772 NVM Command Set: Supported 00:12:15.772 Boot Partition: Not Supported 00:12:15.772 Memory Page Size Minimum: 4096 bytes 00:12:15.772 Memory Page Size Maximum: 65536 bytes 00:12:15.772 Persistent Memory Region: Not Supported 00:12:15.772 Optional Asynchronous Events Supported 00:12:15.772 Namespace Attribute Notices: Supported 00:12:15.772 Firmware Activation Notices: Not Supported 00:12:15.772 ANA Change Notices: Not Supported 00:12:15.772 PLE Aggregate Log Change Notices: Not Supported 00:12:15.772 LBA Status Info Alert Notices: Not Supported 00:12:15.772 EGE Aggregate Log Change Notices: Not Supported 00:12:15.772 Normal NVM Subsystem Shutdown event: Not Supported 00:12:15.772 Zone Descriptor Change Notices: Not Supported 00:12:15.772 Discovery Log Change Notices: Not Supported 00:12:15.772 Controller Attributes 00:12:15.772 128-bit Host Identifier: Not Supported 00:12:15.772 Non-Operational Permissive Mode: Not Supported 00:12:15.772 NVM Sets: Not Supported 00:12:15.772 Read Recovery Levels: Not Supported 00:12:15.772 Endurance Groups: Not Supported 00:12:15.772 Predictable Latency Mode: Not Supported 00:12:15.772 Traffic Based Keep ALive: Not Supported 00:12:15.772 Namespace Granularity: Not Supported 00:12:15.772 SQ Associations: Not Supported 00:12:15.772 UUID List: Not Supported 00:12:15.772 Multi-Domain Subsystem: Not Supported 00:12:15.772 Fixed Capacity Management: Not Supported 00:12:15.772 Variable Capacity Management: Not Supported 00:12:15.772 Delete Endurance Group: Not Supported 00:12:15.772 Delete NVM Set: Not Supported 00:12:15.772 Extended LBA Formats Supported: Supported 00:12:15.772 Flexible Data Placement Supported: Not Supported 00:12:15.772 00:12:15.772 Controller Memory Buffer Support 00:12:15.772 ================================ 00:12:15.772 Supported: No 00:12:15.772 00:12:15.772 Persistent Memory Region Support 00:12:15.772 ================================ 00:12:15.772 Supported: No 00:12:15.772 00:12:15.772 Admin Command Set Attributes 00:12:15.772 ============================ 00:12:15.772 Security Send/Receive: Not Supported 00:12:15.772 Format NVM: Supported 00:12:15.772 Firmware Activate/Download: Not Supported 00:12:15.772 Namespace Management: Supported 00:12:15.772 Device Self-Test: Not Supported 00:12:15.772 Directives: Supported 00:12:15.772 NVMe-MI: Not Supported 00:12:15.772 Virtualization Management: Not Supported 00:12:15.772 Doorbell Buffer Config: Supported 00:12:15.772 Get LBA Status Capability: Not Supported 00:12:15.772 Command & Feature Lockdown Capability: Not Supported 00:12:15.772 Abort 
Command Limit: 4 00:12:15.772 Async Event Request Limit: 4 00:12:15.772 Number of Firmware Slots: N/A 00:12:15.772 Firmware Slot 1 Read-Only: N/A 00:12:15.772 Firmware Activation Without Reset: N/A 00:12:15.772 Multiple Update Detection Support: N/A 00:12:15.772 Firmware Update Granularity: No Information Provided 00:12:15.772 Per-Namespace SMART Log: Yes 00:12:15.772 Asymmetric Namespace Access Log Page: Not Supported 00:12:15.772 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:15.772 Command Effects Log Page: Supported 00:12:15.772 Get Log Page Extended Data: Supported 00:12:15.772 Telemetry Log Pages: Not Supported 00:12:15.772 Persistent Event Log Pages: Not Supported 00:12:15.772 Supported Log Pages Log Page: May Support 00:12:15.772 Commands Supported & Effects Log Page: Not Supported 00:12:15.772 Feature Identifiers & Effects Log Page:May Support 00:12:15.772 NVMe-MI Commands & Effects Log Page: May Support 00:12:15.772 Data Area 4 for Telemetry Log: Not Supported 00:12:15.772 Error Log Page Entries Supported: 1 00:12:15.772 Keep Alive: Not Supported 00:12:15.772 00:12:15.772 NVM Command Set Attributes 00:12:15.772 ========================== 00:12:15.773 Submission Queue Entry Size 00:12:15.773 Max: 64 00:12:15.773 Min: 64 00:12:15.773 Completion Queue Entry Size 00:12:15.773 Max: 16 00:12:15.773 Min: 16 00:12:15.773 Number of Namespaces: 256 00:12:15.773 Compare Command: Supported 00:12:15.773 Write Uncorrectable Command: Not Supported 00:12:15.773 Dataset Management Command: Supported 00:12:15.773 Write Zeroes Command: Supported 00:12:15.773 Set Features Save Field: Supported 00:12:15.773 Reservations: Not Supported 00:12:15.773 Timestamp: Supported 00:12:15.773 Copy: Supported 00:12:15.773 Volatile Write Cache: Present 00:12:15.773 Atomic Write Unit (Normal): 1 00:12:15.773 Atomic Write Unit (PFail): 1 00:12:15.773 Atomic Compare & Write Unit: 1 00:12:15.773 Fused Compare & Write: Not Supported 00:12:15.773 Scatter-Gather List 00:12:15.773 SGL Command Set: Supported 00:12:15.773 SGL Keyed: Not Supported 00:12:15.773 SGL Bit Bucket Descriptor: Not Supported 00:12:15.773 SGL Metadata Pointer: Not Supported 00:12:15.773 Oversized SGL: Not Supported 00:12:15.773 SGL Metadata Address: Not Supported 00:12:15.773 SGL Offset: Not Supported 00:12:15.773 Transport SGL Data Block: Not Supported 00:12:15.773 Replay Protected Memory Block: Not Supported 00:12:15.773 00:12:15.773 Firmware Slot Information 00:12:15.773 ========================= 00:12:15.773 Active slot: 1 00:12:15.773 Slot 1 Firmware Revision: 1.0 00:12:15.773 00:12:15.773 00:12:15.773 Commands Supported and Effects 00:12:15.773 ============================== 00:12:15.773 Admin Commands 00:12:15.773 -------------- 00:12:15.773 Delete I/O Submission Queue (00h): Supported 00:12:15.773 Create I/O Submission Queue (01h): Supported 00:12:15.773 Get Log Page (02h): Supported 00:12:15.773 Delete I/O Completion Queue (04h): Supported 00:12:15.773 Create I/O Completion Queue (05h): Supported 00:12:15.773 Identify (06h): Supported 00:12:15.773 Abort (08h): Supported 00:12:15.773 Set Features (09h): Supported 00:12:15.773 Get Features (0Ah): Supported 00:12:15.773 Asynchronous Event Request (0Ch): Supported 00:12:15.773 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:15.773 Directive Send (19h): Supported 00:12:15.773 Directive Receive (1Ah): Supported 00:12:15.773 Virtualization Management (1Ch): Supported 00:12:15.773 Doorbell Buffer Config (7Ch): Supported 00:12:15.773 Format NVM (80h): Supported LBA-Change 
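All of the controller dumps in this pass come from the single spdk_nvme_identify -i 0 invocation at nvme/nvme.sh@14 above, which probes every attached controller in one go. A per-controller variant is sketched below, reusing rootdir and bdfs from the earlier sketch; it is hedged — the -r 'trtype:PCIe traddr:...' transport-ID filter is assumed from SPDK's nvme example tools rather than shown in this part of the trace:

    # Assumed per-controller identify loop; -i 0 joins shared memory
    # group 0, as in the invocation above, so the tool attaches as a
    # secondary process alongside the already-running stub instead of
    # probing the bus as a primary.
    for bdf in "${bdfs[@]}"; do
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done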
00:12:15.773 I/O Commands 00:12:15.773 ------------ 00:12:15.773 Flush (00h): Supported LBA-Change 00:12:15.773 Write (01h): Supported LBA-Change 00:12:15.773 Read (02h): Supported 00:12:15.773 Compare (05h): Supported 00:12:15.773 Write Zeroes (08h): Supported LBA-Change 00:12:15.773 Dataset Management (09h): Supported LBA-Change 00:12:15.773 Unknown (0Ch): Supported 00:12:15.773 Unknown (12h): Supported 00:12:15.773 Copy (19h): Supported LBA-Change 00:12:15.773 Unknown (1Dh): Supported LBA-Change 00:12:15.773 00:12:15.773 Error Log 00:12:15.773 ========= 00:12:15.773 00:12:15.773 Arbitration 00:12:15.773 =========== 00:12:15.773 Arbitration Burst: no limit 00:12:15.773 00:12:15.773 Power Management 00:12:15.773 ================ 00:12:15.773 Number of Power States: 1 00:12:15.773 Current Power State: Power State #0 00:12:15.773 Power State #0: 00:12:15.773 Max Power: 25.00 W 00:12:15.773 Non-Operational State: Operational 00:12:15.773 Entry Latency: 16 microseconds 00:12:15.773 Exit Latency: 4 microseconds 00:12:15.773 Relative Read Throughput: 0 00:12:15.773 Relative Read Latency: 0 00:12:15.773 Relative Write Throughput: 0 00:12:15.773 Relative Write Latency: 0 00:12:15.773 Idle Power: Not Reported 00:12:15.773 Active Power: Not Reported 00:12:15.773 Non-Operational Permissive Mode: Not Supported 00:12:15.773 00:12:15.773 Health Information 00:12:15.773 ================== 00:12:15.773 Critical Warnings: 00:12:15.773 Available Spare Space: OK 00:12:15.773 Temperature: [2024-11-26 20:40:10.687576] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64777 terminated unexpected 00:12:15.773 OK 00:12:15.773 Device Reliability: OK 00:12:15.773 Read Only: No 00:12:15.773 Volatile Memory Backup: OK 00:12:15.773 Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.773 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:15.773 Available Spare: 0% 00:12:15.773 Available Spare Threshold: 0% 00:12:15.773 Life Percentage Used: 0% 00:12:15.773 Data Units Read: 921 00:12:15.773 Data Units Written: 788 00:12:15.773 Host Read Commands: 44712 00:12:15.773 Host Write Commands: 43486 00:12:15.773 Controller Busy Time: 0 minutes 00:12:15.773 Power Cycles: 0 00:12:15.773 Power On Hours: 0 hours 00:12:15.773 Unsafe Shutdowns: 0 00:12:15.773 Unrecoverable Media Errors: 0 00:12:15.773 Lifetime Error Log Entries: 0 00:12:15.773 Warning Temperature Time: 0 minutes 00:12:15.773 Critical Temperature Time: 0 minutes 00:12:15.773 00:12:15.773 Number of Queues 00:12:15.773 ================ 00:12:15.773 Number of I/O Submission Queues: 64 00:12:15.773 Number of I/O Completion Queues: 64 00:12:15.773 00:12:15.773 ZNS Specific Controller Data 00:12:15.773 ============================ 00:12:15.773 Zone Append Size Limit: 0 00:12:15.773 00:12:15.773 00:12:15.773 Active Namespaces 00:12:15.773 ================= 00:12:15.773 Namespace ID:1 00:12:15.773 Error Recovery Timeout: Unlimited 00:12:15.773 Command Set Identifier: NVM (00h) 00:12:15.773 Deallocate: Supported 00:12:15.773 Deallocated/Unwritten Error: Supported 00:12:15.773 Deallocated Read Value: All 0x00 00:12:15.773 Deallocate in Write Zeroes: Not Supported 00:12:15.773 Deallocated Guard Field: 0xFFFF 00:12:15.773 Flush: Supported 00:12:15.773 Reservation: Not Supported 00:12:15.773 Namespace Sharing Capabilities: Private 00:12:15.773 Size (in LBAs): 1310720 (5GiB) 00:12:15.773 Capacity (in LBAs): 1310720 (5GiB) 00:12:15.773 Utilization (in LBAs): 1310720 (5GiB) 00:12:15.773 Thin Provisioning: Not Supported 00:12:15.773 Per-NS 
Atomic Units: No 00:12:15.773 Maximum Single Source Range Length: 128 00:12:15.773 Maximum Copy Length: 128 00:12:15.773 Maximum Source Range Count: 128 00:12:15.773 NGUID/EUI64 Never Reused: No 00:12:15.773 Namespace Write Protected: No 00:12:15.773 Number of LBA Formats: 8 00:12:15.773 Current LBA Format: LBA Format #04 00:12:15.773 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.773 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.773 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.773 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.773 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.773 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.773 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.773 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.773 00:12:15.773 NVM Specific Namespace Data 00:12:15.773 =========================== 00:12:15.773 Logical Block Storage Tag Mask: 0 00:12:15.773 Protection Information Capabilities: 00:12:15.773 16b Guard Protection Information Storage Tag Support: No 00:12:15.773 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.773 Storage Tag Check Read Support: No 00:12:15.773 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.773 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.773 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.773 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.773 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.773 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.773 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.773 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.773 ===================================================== 00:12:15.773 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:15.773 ===================================================== 00:12:15.773 Controller Capabilities/Features 00:12:15.773 ================================ 00:12:15.773 Vendor ID: 1b36 00:12:15.773 Subsystem Vendor ID: 1af4 00:12:15.773 Serial Number: 12343 00:12:15.773 Model Number: QEMU NVMe Ctrl 00:12:15.773 Firmware Version: 8.0.0 00:12:15.773 Recommended Arb Burst: 6 00:12:15.773 IEEE OUI Identifier: 00 54 52 00:12:15.773 Multi-path I/O 00:12:15.773 May have multiple subsystem ports: No 00:12:15.773 May have multiple controllers: Yes 00:12:15.773 Associated with SR-IOV VF: No 00:12:15.773 Max Data Transfer Size: 524288 00:12:15.773 Max Number of Namespaces: 256 00:12:15.773 Max Number of I/O Queues: 64 00:12:15.773 NVMe Specification Version (VS): 1.4 00:12:15.773 NVMe Specification Version (Identify): 1.4 00:12:15.773 Maximum Queue Entries: 2048 00:12:15.773 Contiguous Queues Required: Yes 00:12:15.773 Arbitration Mechanisms Supported 00:12:15.773 Weighted Round Robin: Not Supported 00:12:15.773 Vendor Specific: Not Supported 00:12:15.773 Reset Timeout: 7500 ms 00:12:15.773 Doorbell Stride: 4 bytes 00:12:15.773 NVM Subsystem Reset: Not Supported 00:12:15.773 Command Sets Supported 00:12:15.773 NVM Command Set: Supported 00:12:15.773 Boot Partition: Not Supported 00:12:15.773 Memory Page Size Minimum: 4096 bytes 00:12:15.773 Memory Page Size 
Maximum: 65536 bytes 00:12:15.773 Persistent Memory Region: Not Supported 00:12:15.773 Optional Asynchronous Events Supported 00:12:15.773 Namespace Attribute Notices: Supported 00:12:15.773 Firmware Activation Notices: Not Supported 00:12:15.773 ANA Change Notices: Not Supported 00:12:15.773 PLE Aggregate Log Change Notices: Not Supported 00:12:15.773 LBA Status Info Alert Notices: Not Supported 00:12:15.773 EGE Aggregate Log Change Notices: Not Supported 00:12:15.773 Normal NVM Subsystem Shutdown event: Not Supported 00:12:15.773 Zone Descriptor Change Notices: Not Supported 00:12:15.773 Discovery Log Change Notices: Not Supported 00:12:15.773 Controller Attributes 00:12:15.773 128-bit Host Identifier: Not Supported 00:12:15.773 Non-Operational Permissive Mode: Not Supported 00:12:15.773 NVM Sets: Not Supported 00:12:15.773 Read Recovery Levels: Not Supported 00:12:15.773 Endurance Groups: Supported 00:12:15.773 Predictable Latency Mode: Not Supported 00:12:15.773 Traffic Based Keep ALive: Not Supported 00:12:15.773 Namespace Granularity: Not Supported 00:12:15.773 SQ Associations: Not Supported 00:12:15.773 UUID List: Not Supported 00:12:15.773 Multi-Domain Subsystem: Not Supported 00:12:15.773 Fixed Capacity Management: Not Supported 00:12:15.773 Variable Capacity Management: Not Supported 00:12:15.773 Delete Endurance Group: Not Supported 00:12:15.773 Delete NVM Set: Not Supported 00:12:15.773 Extended LBA Formats Supported: Supported 00:12:15.773 Flexible Data Placement Supported: Supported 00:12:15.773 00:12:15.773 Controller Memory Buffer Support 00:12:15.773 ================================ 00:12:15.773 Supported: No 00:12:15.773 00:12:15.773 Persistent Memory Region Support 00:12:15.773 ================================ 00:12:15.773 Supported: No 00:12:15.773 00:12:15.773 Admin Command Set Attributes 00:12:15.773 ============================ 00:12:15.773 Security Send/Receive: Not Supported 00:12:15.773 Format NVM: Supported 00:12:15.773 Firmware Activate/Download: Not Supported 00:12:15.773 Namespace Management: Supported 00:12:15.773 Device Self-Test: Not Supported 00:12:15.773 Directives: Supported 00:12:15.773 NVMe-MI: Not Supported 00:12:15.773 Virtualization Management: Not Supported 00:12:15.773 Doorbell Buffer Config: Supported 00:12:15.774 Get LBA Status Capability: Not Supported 00:12:15.774 Command & Feature Lockdown Capability: Not Supported 00:12:15.774 Abort Command Limit: 4 00:12:15.774 Async Event Request Limit: 4 00:12:15.774 Number of Firmware Slots: N/A 00:12:15.774 Firmware Slot 1 Read-Only: N/A 00:12:15.774 Firmware Activation Without Reset: N/A 00:12:15.774 Multiple Update Detection Support: N/A 00:12:15.774 Firmware Update Granularity: No Information Provided 00:12:15.774 Per-Namespace SMART Log: Yes 00:12:15.774 Asymmetric Namespace Access Log Page: Not Supported 00:12:15.774 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:15.774 Command Effects Log Page: Supported 00:12:15.774 Get Log Page Extended Data: Supported 00:12:15.774 Telemetry Log Pages: Not Supported 00:12:15.774 Persistent Event Log Pages: Not Supported 00:12:15.774 Supported Log Pages Log Page: May Support 00:12:15.774 Commands Supported & Effects Log Page: Not Supported 00:12:15.774 Feature Identifiers & Effects Log Page:May Support 00:12:15.774 NVMe-MI Commands & Effects Log Page: May Support 00:12:15.774 Data Area 4 for Telemetry Log: Not Supported 00:12:15.774 Error Log Page Entries Supported: 1 00:12:15.774 Keep Alive: Not Supported 00:12:15.774 00:12:15.774 NVM Command Set 
Attributes 00:12:15.774 ========================== 00:12:15.774 Submission Queue Entry Size 00:12:15.774 Max: 64 00:12:15.774 Min: 64 00:12:15.774 Completion Queue Entry Size 00:12:15.774 Max: 16 00:12:15.774 Min: 16 00:12:15.774 Number of Namespaces: 256 00:12:15.774 Compare Command: Supported 00:12:15.774 Write Uncorrectable Command: Not Supported 00:12:15.774 Dataset Management Command: Supported 00:12:15.774 Write Zeroes Command: Supported 00:12:15.774 Set Features Save Field: Supported 00:12:15.774 Reservations: Not Supported 00:12:15.774 Timestamp: Supported 00:12:15.774 Copy: Supported 00:12:15.774 Volatile Write Cache: Present 00:12:15.774 Atomic Write Unit (Normal): 1 00:12:15.774 Atomic Write Unit (PFail): 1 00:12:15.774 Atomic Compare & Write Unit: 1 00:12:15.774 Fused Compare & Write: Not Supported 00:12:15.774 Scatter-Gather List 00:12:15.774 SGL Command Set: Supported 00:12:15.774 SGL Keyed: Not Supported 00:12:15.774 SGL Bit Bucket Descriptor: Not Supported 00:12:15.774 SGL Metadata Pointer: Not Supported 00:12:15.774 Oversized SGL: Not Supported 00:12:15.774 SGL Metadata Address: Not Supported 00:12:15.774 SGL Offset: Not Supported 00:12:15.774 Transport SGL Data Block: Not Supported 00:12:15.774 Replay Protected Memory Block: Not Supported 00:12:15.774 00:12:15.774 Firmware Slot Information 00:12:15.774 ========================= 00:12:15.774 Active slot: 1 00:12:15.774 Slot 1 Firmware Revision: 1.0 00:12:15.774 00:12:15.774 00:12:15.774 Commands Supported and Effects 00:12:15.774 ============================== 00:12:15.774 Admin Commands 00:12:15.774 -------------- 00:12:15.774 Delete I/O Submission Queue (00h): Supported 00:12:15.774 Create I/O Submission Queue (01h): Supported 00:12:15.774 Get Log Page (02h): Supported 00:12:15.774 Delete I/O Completion Queue (04h): Supported 00:12:15.774 Create I/O Completion Queue (05h): Supported 00:12:15.774 Identify (06h): Supported 00:12:15.774 Abort (08h): Supported 00:12:15.774 Set Features (09h): Supported 00:12:15.774 Get Features (0Ah): Supported 00:12:15.774 Asynchronous Event Request (0Ch): Supported 00:12:15.774 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:15.774 Directive Send (19h): Supported 00:12:15.774 Directive Receive (1Ah): Supported 00:12:15.774 Virtualization Management (1Ch): Supported 00:12:15.774 Doorbell Buffer Config (7Ch): Supported 00:12:15.774 Format NVM (80h): Supported LBA-Change 00:12:15.774 I/O Commands 00:12:15.774 ------------ 00:12:15.774 Flush (00h): Supported LBA-Change 00:12:15.774 Write (01h): Supported LBA-Change 00:12:15.774 Read (02h): Supported 00:12:15.774 Compare (05h): Supported 00:12:15.774 Write Zeroes (08h): Supported LBA-Change 00:12:15.774 Dataset Management (09h): Supported LBA-Change 00:12:15.774 Unknown (0Ch): Supported 00:12:15.774 Unknown (12h): Supported 00:12:15.774 Copy (19h): Supported LBA-Change 00:12:15.774 Unknown (1Dh): Supported LBA-Change 00:12:15.774 00:12:15.774 Error Log 00:12:15.774 ========= 00:12:15.774 00:12:15.774 Arbitration 00:12:15.774 =========== 00:12:15.774 Arbitration Burst: no limit 00:12:15.774 00:12:15.774 Power Management 00:12:15.774 ================ 00:12:15.774 Number of Power States: 1 00:12:15.774 Current Power State: Power State #0 00:12:15.774 Power State #0: 00:12:15.774 Max Power: 25.00 W 00:12:15.774 Non-Operational State: Operational 00:12:15.774 Entry Latency: 16 microseconds 00:12:15.774 Exit Latency: 4 microseconds 00:12:15.774 Relative Read Throughput: 0 00:12:15.774 Relative Read Latency: 0 00:12:15.774 Relative 
Write Throughput: 0 00:12:15.774 Relative Write Latency: 0 00:12:15.774 Idle Power: Not Reported 00:12:15.774 Active Power: Not Reported 00:12:15.774 Non-Operational Permissive Mode: Not Supported 00:12:15.774 00:12:15.774 Health Information 00:12:15.774 ================== 00:12:15.774 Critical Warnings: 00:12:15.774 Available Spare Space: OK 00:12:15.774 Temperature: OK 00:12:15.774 Device Reliability: OK 00:12:15.774 Read Only: No 00:12:15.774 Volatile Memory Backup: OK 00:12:15.774 Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.774 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:15.774 Available Spare: 0% 00:12:15.774 Available Spare Threshold: 0% 00:12:15.774 Life Percentage Used: 0% 00:12:15.774 Data Units Read: 744 00:12:15.774 Data Units Written: 673 00:12:15.774 Host Read Commands: 31272 00:12:15.774 Host Write Commands: 30695 00:12:15.774 Controller Busy Time: 0 minutes 00:12:15.774 Power Cycles: 0 00:12:15.774 Power On Hours: 0 hours 00:12:15.774 Unsafe Shutdowns: 0 00:12:15.774 Unrecoverable Media Errors: 0 00:12:15.774 Lifetime Error Log Entries: 0 00:12:15.774 Warning Temperature Time: 0 minutes 00:12:15.774 Critical Temperature Time: 0 minutes 00:12:15.774 00:12:15.774 Number of Queues 00:12:15.774 ================ 00:12:15.774 Number of I/O Submission Queues: 64 00:12:15.774 Number of I/O Completion Queues: 64 00:12:15.774 00:12:15.774 ZNS Specific Controller Data 00:12:15.774 ============================ 00:12:15.774 Zone Append Size Limit: 0 00:12:15.774 00:12:15.774 00:12:15.774 Active Namespaces 00:12:15.774 ================= 00:12:15.774 Namespace ID:1 00:12:15.774 Error Recovery Timeout: Unlimited 00:12:15.774 Command Set Identifier: NVM (00h) 00:12:15.774 Deallocate: Supported 00:12:15.774 Deallocated/Unwritten Error: Supported 00:12:15.774 Deallocated Read Value: All 0x00 00:12:15.774 Deallocate in Write Zeroes: Not Supported 00:12:15.774 Deallocated Guard Field: 0xFFFF 00:12:15.774 Flush: Supported 00:12:15.774 Reservation: Not Supported 00:12:15.774 Namespace Sharing Capabilities: Multiple Controllers 00:12:15.774 Size (in LBAs): 262144 (1GiB) 00:12:15.774 Capacity (in LBAs): 262144 (1GiB) 00:12:15.774 Utilization (in LBAs): 262144 (1GiB) 00:12:15.774 Thin Provisioning: Not Supported 00:12:15.774 Per-NS Atomic Units: No 00:12:15.774 Maximum Single Source Range Length: 128 00:12:15.774 Maximum Copy Length: 128 00:12:15.774 Maximum Source Range Count: 128 00:12:15.774 NGUID/EUI64 Never Reused: No 00:12:15.774 Namespace Write Protected: No 00:12:15.774 Endurance group ID: 1 00:12:15.774 Number of LBA Formats: 8 00:12:15.774 Current LBA Format: LBA Format #04 00:12:15.774 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.774 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.774 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.774 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.774 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.774 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.774 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.774 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.774 00:12:15.774 Get Feature FDP: 00:12:15.774 ================ 00:12:15.774 Enabled: Yes 00:12:15.774 FDP configuration index: 0 00:12:15.774 00:12:15.774 FDP configurations log page 00:12:15.774 =========================== 00:12:15.774 Number of FDP configurations: 1 00:12:15.774 Version: 0 00:12:15.774 Size: 112 00:12:15.774 FDP Configuration Descriptor: 0 00:12:15.774 Descriptor Size: 96 00:12:15.774 
Reclaim Group Identifier format: 2 00:12:15.774 FDP Volatile Write Cache: Not Present 00:12:15.774 FDP Configuration: Valid 00:12:15.774 Vendor Specific Size: 0 00:12:15.774 Number of Reclaim Groups: 2 00:12:15.774 Number of Reclaim Unit Handles: 8 00:12:15.774 Max Placement Identifiers: 128 00:12:15.774 Number of Namespaces Supported: 256 00:12:15.774 Reclaim Unit Nominal Size: 6000000 bytes 00:12:15.774 Estimated Reclaim Unit Time Limit: Not Reported 00:12:15.774 RUH Desc #000: RUH Type: Initially Isolated 00:12:15.774 RUH Desc #001: RUH Type: Initially Isolated 00:12:15.774 RUH Desc #002: RUH Type: Initially Isolated 00:12:15.774 RUH Desc #003: RUH Type: Initially Isolated 00:12:15.774 RUH Desc #004: RUH Type: Initially Isolated 00:12:15.774 RUH Desc #005: RUH Type: Initially Isolated 00:12:15.774 RUH Desc #006: RUH Type: Initially Isolated 00:12:15.774 RUH Desc #007: RUH Type: Initially Isolated 00:12:15.774 00:12:15.774 FDP reclaim unit handle usage log page 00:12:15.774 ====================================== 00:12:15.774 Number of Reclaim Unit Handles: 8 00:12:15.774 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:15.774 RUH Usage Desc #001: RUH Attributes: Unused 00:12:15.774 RUH Usage Desc #002: RUH Attributes: Unused 00:12:15.774 RUH Usage Desc #003: RUH Attributes: Unused 00:12:15.774 RUH Usage Desc #004: RUH Attributes: Unused 00:12:15.774 RUH Usage Desc #005: RUH Attributes: Unused 00:12:15.774 RUH Usage Desc #006: RUH Attributes: Unused 00:12:15.774 RUH Usage Desc #007: RUH Attributes: Unused 00:12:15.774 00:12:15.774 FDP statistics log page 00:12:15.774 ======================= 00:12:15.774 Host bytes with metadata written: 426287104 00:12:15.774 Media[2024-11-26 20:40:10.689511] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64777 terminated unexpected 00:12:15.774 bytes with metadata written: 426332160 00:12:15.774 Media bytes erased: 0 00:12:15.774 00:12:15.774 FDP events log page 00:12:15.774 =================== 00:12:15.774 Number of FDP events: 0 00:12:15.774 00:12:15.774 NVM Specific Namespace Data 00:12:15.774 =========================== 00:12:15.774 Logical Block Storage Tag Mask: 0 00:12:15.774 Protection Information Capabilities: 00:12:15.774 16b Guard Protection Information Storage Tag Support: No 00:12:15.774 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.774 Storage Tag Check Read Support: No 00:12:15.774 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.774 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.774 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.774 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.774 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.774 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.774 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.774 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.774 ===================================================== 00:12:15.775 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:15.775 ===================================================== 00:12:15.775 Controller Capabilities/Features 00:12:15.775
================================ 00:12:15.775 Vendor ID: 1b36 00:12:15.775 Subsystem Vendor ID: 1af4 00:12:15.775 Serial Number: 12342 00:12:15.775 Model Number: QEMU NVMe Ctrl 00:12:15.775 Firmware Version: 8.0.0 00:12:15.775 Recommended Arb Burst: 6 00:12:15.775 IEEE OUI Identifier: 00 54 52 00:12:15.775 Multi-path I/O 00:12:15.775 May have multiple subsystem ports: No 00:12:15.775 May have multiple controllers: No 00:12:15.775 Associated with SR-IOV VF: No 00:12:15.775 Max Data Transfer Size: 524288 00:12:15.775 Max Number of Namespaces: 256 00:12:15.775 Max Number of I/O Queues: 64 00:12:15.775 NVMe Specification Version (VS): 1.4 00:12:15.775 NVMe Specification Version (Identify): 1.4 00:12:15.775 Maximum Queue Entries: 2048 00:12:15.775 Contiguous Queues Required: Yes 00:12:15.775 Arbitration Mechanisms Supported 00:12:15.775 Weighted Round Robin: Not Supported 00:12:15.775 Vendor Specific: Not Supported 00:12:15.775 Reset Timeout: 7500 ms 00:12:15.775 Doorbell Stride: 4 bytes 00:12:15.775 NVM Subsystem Reset: Not Supported 00:12:15.775 Command Sets Supported 00:12:15.775 NVM Command Set: Supported 00:12:15.775 Boot Partition: Not Supported 00:12:15.775 Memory Page Size Minimum: 4096 bytes 00:12:15.775 Memory Page Size Maximum: 65536 bytes 00:12:15.775 Persistent Memory Region: Not Supported 00:12:15.775 Optional Asynchronous Events Supported 00:12:15.775 Namespace Attribute Notices: Supported 00:12:15.775 Firmware Activation Notices: Not Supported 00:12:15.775 ANA Change Notices: Not Supported 00:12:15.775 PLE Aggregate Log Change Notices: Not Supported 00:12:15.775 LBA Status Info Alert Notices: Not Supported 00:12:15.775 EGE Aggregate Log Change Notices: Not Supported 00:12:15.775 Normal NVM Subsystem Shutdown event: Not Supported 00:12:15.775 Zone Descriptor Change Notices: Not Supported 00:12:15.775 Discovery Log Change Notices: Not Supported 00:12:15.775 Controller Attributes 00:12:15.775 128-bit Host Identifier: Not Supported 00:12:15.775 Non-Operational Permissive Mode: Not Supported 00:12:15.775 NVM Sets: Not Supported 00:12:15.775 Read Recovery Levels: Not Supported 00:12:15.775 Endurance Groups: Not Supported 00:12:15.775 Predictable Latency Mode: Not Supported 00:12:15.775 Traffic Based Keep Alive: Not Supported 00:12:15.775 Namespace Granularity: Not Supported 00:12:15.775 SQ Associations: Not Supported 00:12:15.775 UUID List: Not Supported 00:12:15.775 Multi-Domain Subsystem: Not Supported 00:12:15.775 Fixed Capacity Management: Not Supported 00:12:15.775 Variable Capacity Management: Not Supported 00:12:15.775 Delete Endurance Group: Not Supported 00:12:15.775 Delete NVM Set: Not Supported 00:12:15.775 Extended LBA Formats Supported: Supported 00:12:15.775 Flexible Data Placement Supported: Not Supported 00:12:15.775 00:12:15.775 Controller Memory Buffer Support 00:12:15.775 ================================ 00:12:15.775 Supported: No 00:12:15.775 00:12:15.775 Persistent Memory Region Support 00:12:15.775 ================================ 00:12:15.775 Supported: No 00:12:15.775 00:12:15.775 Admin Command Set Attributes 00:12:15.775 ============================ 00:12:15.775 Security Send/Receive: Not Supported 00:12:15.775 Format NVM: Supported 00:12:15.775 Firmware Activate/Download: Not Supported 00:12:15.775 Namespace Management: Supported 00:12:15.775 Device Self-Test: Not Supported 00:12:15.775 Directives: Supported 00:12:15.775 NVMe-MI: Not Supported 00:12:15.775 Virtualization Management: Not Supported 00:12:15.775 Doorbell Buffer Config: Supported 00:12:15.775 Get
LBA Status Capability: Not Supported 00:12:15.775 Command & Feature Lockdown Capability: Not Supported 00:12:15.775 Abort Command Limit: 4 00:12:15.775 Async Event Request Limit: 4 00:12:15.775 Number of Firmware Slots: N/A 00:12:15.775 Firmware Slot 1 Read-Only: N/A 00:12:15.775 Firmware Activation Without Reset: N/A 00:12:15.775 Multiple Update Detection Support: N/A 00:12:15.775 Firmware Update Granularity: No Information Provided 00:12:15.775 Per-Namespace SMART Log: Yes 00:12:15.775 Asymmetric Namespace Access Log Page: Not Supported 00:12:15.775 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:15.775 Command Effects Log Page: Supported 00:12:15.775 Get Log Page Extended Data: Supported 00:12:15.775 Telemetry Log Pages: Not Supported 00:12:15.775 Persistent Event Log Pages: Not Supported 00:12:15.775 Supported Log Pages Log Page: May Support 00:12:15.775 Commands Supported & Effects Log Page: Not Supported 00:12:15.775 Feature Identifiers & Effects Log Page: May Support 00:12:15.775 NVMe-MI Commands & Effects Log Page: May Support 00:12:15.775 Data Area 4 for Telemetry Log: Not Supported 00:12:15.775 Error Log Page Entries Supported: 1 00:12:15.775 Keep Alive: Not Supported 00:12:15.775 00:12:15.775 NVM Command Set Attributes 00:12:15.775 ========================== 00:12:15.775 Submission Queue Entry Size 00:12:15.775 Max: 64 00:12:15.775 Min: 64 00:12:15.775 Completion Queue Entry Size 00:12:15.775 Max: 16 00:12:15.775 Min: 16 00:12:15.775 Number of Namespaces: 256 00:12:15.775 Compare Command: Supported 00:12:15.775 Write Uncorrectable Command: Not Supported 00:12:15.775 Dataset Management Command: Supported 00:12:15.775 Write Zeroes Command: Supported 00:12:15.775 Set Features Save Field: Supported 00:12:15.775 Reservations: Not Supported 00:12:15.775 Timestamp: Supported 00:12:15.775 Copy: Supported 00:12:15.775 Volatile Write Cache: Present 00:12:15.775 Atomic Write Unit (Normal): 1 00:12:15.775 Atomic Write Unit (PFail): 1 00:12:15.775 Atomic Compare & Write Unit: 1 00:12:15.775 Fused Compare & Write: Not Supported 00:12:15.775 Scatter-Gather List 00:12:15.775 SGL Command Set: Supported 00:12:15.775 SGL Keyed: Not Supported 00:12:15.775 SGL Bit Bucket Descriptor: Not Supported 00:12:15.775 SGL Metadata Pointer: Not Supported 00:12:15.775 Oversized SGL: Not Supported 00:12:15.775 SGL Metadata Address: Not Supported 00:12:15.775 SGL Offset: Not Supported 00:12:15.775 Transport SGL Data Block: Not Supported 00:12:15.775 Replay Protected Memory Block: Not Supported 00:12:15.775 00:12:15.775 Firmware Slot Information 00:12:15.775 ========================= 00:12:15.775 Active slot: 1 00:12:15.775 Slot 1 Firmware Revision: 1.0 00:12:15.775 00:12:15.775 00:12:15.775 Commands Supported and Effects 00:12:15.775 ============================== 00:12:15.775 Admin Commands 00:12:15.775 -------------- 00:12:15.775 Delete I/O Submission Queue (00h): Supported 00:12:15.775 Create I/O Submission Queue (01h): Supported 00:12:15.775 Get Log Page (02h): Supported 00:12:15.775 Delete I/O Completion Queue (04h): Supported 00:12:15.775 Create I/O Completion Queue (05h): Supported 00:12:15.775 Identify (06h): Supported 00:12:15.775 Abort (08h): Supported 00:12:15.775 Set Features (09h): Supported 00:12:15.775 Get Features (0Ah): Supported 00:12:15.775 Asynchronous Event Request (0Ch): Supported 00:12:15.775 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:15.775 Directive Send (19h): Supported 00:12:15.775 Directive Receive (1Ah): Supported 00:12:15.775 Virtualization Management (1Ch):
Supported 00:12:15.775 Doorbell Buffer Config (7Ch): Supported 00:12:15.775 Format NVM (80h): Supported LBA-Change 00:12:15.775 I/O Commands 00:12:15.775 ------------ 00:12:15.775 Flush (00h): Supported LBA-Change 00:12:15.775 Write (01h): Supported LBA-Change 00:12:15.775 Read (02h): Supported 00:12:15.775 Compare (05h): Supported 00:12:15.775 Write Zeroes (08h): Supported LBA-Change 00:12:15.775 Dataset Management (09h): Supported LBA-Change 00:12:15.775 Unknown (0Ch): Supported 00:12:15.775 Unknown (12h): Supported 00:12:15.775 Copy (19h): Supported LBA-Change 00:12:15.775 Unknown (1Dh): Supported LBA-Change 00:12:15.775 00:12:15.775 Error Log 00:12:15.775 ========= 00:12:15.775 00:12:15.775 Arbitration 00:12:15.775 =========== 00:12:15.775 Arbitration Burst: no limit 00:12:15.775 00:12:15.775 Power Management 00:12:15.775 ================ 00:12:15.775 Number of Power States: 1 00:12:15.775 Current Power State: Power State #0 00:12:15.775 Power State #0: 00:12:15.775 Max Power: 25.00 W 00:12:15.775 Non-Operational State: Operational 00:12:15.775 Entry Latency: 16 microseconds 00:12:15.775 Exit Latency: 4 microseconds 00:12:15.775 Relative Read Throughput: 0 00:12:15.775 Relative Read Latency: 0 00:12:15.775 Relative Write Throughput: 0 00:12:15.775 Relative Write Latency: 0 00:12:15.775 Idle Power: Not Reported 00:12:15.775 Active Power: Not Reported 00:12:15.775 Non-Operational Permissive Mode: Not Supported 00:12:15.775 00:12:15.775 Health Information 00:12:15.775 ================== 00:12:15.775 Critical Warnings: 00:12:15.775 Available Spare Space: OK 00:12:15.775 Temperature: OK 00:12:15.775 Device Reliability: OK 00:12:15.775 Read Only: No 00:12:15.775 Volatile Memory Backup: OK 00:12:15.775 Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.775 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:15.775 Available Spare: 0% 00:12:15.775 Available Spare Threshold: 0% 00:12:15.775 Life Percentage Used: 0% 00:12:15.775 Data Units Read: 1899 00:12:15.775 Data Units Written: 1686 00:12:15.775 Host Read Commands: 91214 00:12:15.775 Host Write Commands: 89483 00:12:15.775 Controller Busy Time: 0 minutes 00:12:15.775 Power Cycles: 0 00:12:15.775 Power On Hours: 0 hours 00:12:15.775 Unsafe Shutdowns: 0 00:12:15.775 Unrecoverable Media Errors: 0 00:12:15.775 Lifetime Error Log Entries: 0 00:12:15.775 Warning Temperature Time: 0 minutes 00:12:15.775 Critical Temperature Time: 0 minutes 00:12:15.775 00:12:15.775 Number of Queues 00:12:15.775 ================ 00:12:15.775 Number of I/O Submission Queues: 64 00:12:15.775 Number of I/O Completion Queues: 64 00:12:15.775 00:12:15.775 ZNS Specific Controller Data 00:12:15.775 ============================ 00:12:15.775 Zone Append Size Limit: 0 00:12:15.775 00:12:15.775 00:12:15.775 Active Namespaces 00:12:15.776 ================= 00:12:15.776 Namespace ID:1 00:12:15.776 Error Recovery Timeout: Unlimited 00:12:15.776 Command Set Identifier: NVM (00h) 00:12:15.776 Deallocate: Supported 00:12:15.776 Deallocated/Unwritten Error: Supported 00:12:15.776 Deallocated Read Value: All 0x00 00:12:15.776 Deallocate in Write Zeroes: Not Supported 00:12:15.776 Deallocated Guard Field: 0xFFFF 00:12:15.776 Flush: Supported 00:12:15.776 Reservation: Not Supported 00:12:15.776 Namespace Sharing Capabilities: Private 00:12:15.776 Size (in LBAs): 1048576 (4GiB) 00:12:15.776 Capacity (in LBAs): 1048576 (4GiB) 00:12:15.776 Utilization (in LBAs): 1048576 (4GiB) 00:12:15.776 Thin Provisioning: Not Supported 00:12:15.776 Per-NS Atomic Units: No 00:12:15.776 Maximum 
Single Source Range Length: 128 00:12:15.776 Maximum Copy Length: 128 00:12:15.776 Maximum Source Range Count: 128 00:12:15.776 NGUID/EUI64 Never Reused: No 00:12:15.776 Namespace Write Protected: No 00:12:15.776 Number of LBA Formats: 8 00:12:15.776 Current LBA Format: LBA Format #04 00:12:15.776 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.776 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.776 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.776 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.776 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.776 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.776 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.776 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.776 00:12:15.776 NVM Specific Namespace Data 00:12:15.776 =========================== 00:12:15.776 Logical Block Storage Tag Mask: 0 00:12:15.776 Protection Information Capabilities: 00:12:15.776 16b Guard Protection Information Storage Tag Support: No 00:12:15.776 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.776 Storage Tag Check Read Support: No 00:12:15.776 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Namespace ID:2 00:12:15.776 Error Recovery Timeout: Unlimited 00:12:15.776 Command Set Identifier: NVM (00h) 00:12:15.776 Deallocate: Supported 00:12:15.776 Deallocated/Unwritten Error: Supported 00:12:15.776 Deallocated Read Value: All 0x00 00:12:15.776 Deallocate in Write Zeroes: Not Supported 00:12:15.776 Deallocated Guard Field: 0xFFFF 00:12:15.776 Flush: Supported 00:12:15.776 Reservation: Not Supported 00:12:15.776 Namespace Sharing Capabilities: Private 00:12:15.776 Size (in LBAs): 1048576 (4GiB) 00:12:15.776 Capacity (in LBAs): 1048576 (4GiB) 00:12:15.776 Utilization (in LBAs): 1048576 (4GiB) 00:12:15.776 Thin Provisioning: Not Supported 00:12:15.776 Per-NS Atomic Units: No 00:12:15.776 Maximum Single Source Range Length: 128 00:12:15.776 Maximum Copy Length: 128 00:12:15.776 Maximum Source Range Count: 128 00:12:15.776 NGUID/EUI64 Never Reused: No 00:12:15.776 Namespace Write Protected: No 00:12:15.776 Number of LBA Formats: 8 00:12:15.776 Current LBA Format: LBA Format #04 00:12:15.776 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.776 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.776 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.776 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.776 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.776 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.776 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.776 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.776 00:12:15.776 NVM 
Specific Namespace Data 00:12:15.776 =========================== 00:12:15.776 Logical Block Storage Tag Mask: 0 00:12:15.776 Protection Information Capabilities: 00:12:15.776 16b Guard Protection Information Storage Tag Support: No 00:12:15.776 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.776 Storage Tag Check Read Support: No 00:12:15.776 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Namespace ID:3 00:12:15.776 Error Recovery Timeout: Unlimited 00:12:15.776 Command Set Identifier: NVM (00h) 00:12:15.776 Deallocate: Supported 00:12:15.776 Deallocated/Unwritten Error: Supported 00:12:15.776 Deallocated Read Value: All 0x00 00:12:15.776 Deallocate in Write Zeroes: Not Supported 00:12:15.776 Deallocated Guard Field: 0xFFFF 00:12:15.776 Flush: Supported 00:12:15.776 Reservation: Not Supported 00:12:15.776 Namespace Sharing Capabilities: Private 00:12:15.776 Size (in LBAs): 1048576 (4GiB) 00:12:15.776 Capacity (in LBAs): 1048576 (4GiB) 00:12:15.776 Utilization (in LBAs): 1048576 (4GiB) 00:12:15.776 Thin Provisioning: Not Supported 00:12:15.776 Per-NS Atomic Units: No 00:12:15.776 Maximum Single Source Range Length: 128 00:12:15.776 Maximum Copy Length: 128 00:12:15.776 Maximum Source Range Count: 128 00:12:15.776 NGUID/EUI64 Never Reused: No 00:12:15.776 Namespace Write Protected: No 00:12:15.776 Number of LBA Formats: 8 00:12:15.776 Current LBA Format: LBA Format #04 00:12:15.776 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.776 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.776 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.776 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.776 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.776 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.776 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.776 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.776 00:12:15.776 NVM Specific Namespace Data 00:12:15.776 =========================== 00:12:15.776 Logical Block Storage Tag Mask: 0 00:12:15.776 Protection Information Capabilities: 00:12:15.776 16b Guard Protection Information Storage Tag Support: No 00:12:15.776 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.776 Storage Tag Check Read Support: No 00:12:15.776 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA 
Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.776 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.034 20:40:10 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:16.034 20:40:10 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:12:16.293 ===================================================== 00:12:16.293 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:16.293 ===================================================== 00:12:16.293 Controller Capabilities/Features 00:12:16.293 ================================ 00:12:16.293 Vendor ID: 1b36 00:12:16.293 Subsystem Vendor ID: 1af4 00:12:16.293 Serial Number: 12340 00:12:16.293 Model Number: QEMU NVMe Ctrl 00:12:16.293 Firmware Version: 8.0.0 00:12:16.293 Recommended Arb Burst: 6 00:12:16.293 IEEE OUI Identifier: 00 54 52 00:12:16.293 Multi-path I/O 00:12:16.293 May have multiple subsystem ports: No 00:12:16.293 May have multiple controllers: No 00:12:16.293 Associated with SR-IOV VF: No 00:12:16.293 Max Data Transfer Size: 524288 00:12:16.293 Max Number of Namespaces: 256 00:12:16.293 Max Number of I/O Queues: 64 00:12:16.293 NVMe Specification Version (VS): 1.4 00:12:16.293 NVMe Specification Version (Identify): 1.4 00:12:16.293 Maximum Queue Entries: 2048 00:12:16.293 Contiguous Queues Required: Yes 00:12:16.293 Arbitration Mechanisms Supported 00:12:16.293 Weighted Round Robin: Not Supported 00:12:16.293 Vendor Specific: Not Supported 00:12:16.293 Reset Timeout: 7500 ms 00:12:16.293 Doorbell Stride: 4 bytes 00:12:16.293 NVM Subsystem Reset: Not Supported 00:12:16.293 Command Sets Supported 00:12:16.293 NVM Command Set: Supported 00:12:16.293 Boot Partition: Not Supported 00:12:16.293 Memory Page Size Minimum: 4096 bytes 00:12:16.293 Memory Page Size Maximum: 65536 bytes 00:12:16.293 Persistent Memory Region: Not Supported 00:12:16.293 Optional Asynchronous Events Supported 00:12:16.293 Namespace Attribute Notices: Supported 00:12:16.293 Firmware Activation Notices: Not Supported 00:12:16.293 ANA Change Notices: Not Supported 00:12:16.293 PLE Aggregate Log Change Notices: Not Supported 00:12:16.293 LBA Status Info Alert Notices: Not Supported 00:12:16.293 EGE Aggregate Log Change Notices: Not Supported 00:12:16.293 Normal NVM Subsystem Shutdown event: Not Supported 00:12:16.293 Zone Descriptor Change Notices: Not Supported 00:12:16.293 Discovery Log Change Notices: Not Supported 00:12:16.293 Controller Attributes 00:12:16.293 128-bit Host Identifier: Not Supported 00:12:16.293 Non-Operational Permissive Mode: Not Supported 00:12:16.293 NVM Sets: Not Supported 00:12:16.293 Read Recovery Levels: Not Supported 00:12:16.293 Endurance Groups: Not Supported 00:12:16.293 Predictable Latency Mode: Not Supported 00:12:16.293 Traffic Based Keep Alive: Not Supported 00:12:16.293 Namespace Granularity: Not Supported 00:12:16.293 SQ Associations: Not Supported 00:12:16.293 UUID List: Not Supported 00:12:16.294 Multi-Domain Subsystem: Not Supported 00:12:16.294 Fixed Capacity Management: Not Supported 00:12:16.294 Variable Capacity Management: Not Supported 00:12:16.294 Delete Endurance Group: Not Supported 00:12:16.294 Delete NVM Set: Not Supported
00:12:16.294 Extended LBA Formats Supported: Supported 00:12:16.294 Flexible Data Placement Supported: Not Supported 00:12:16.294 00:12:16.294 Controller Memory Buffer Support 00:12:16.294 ================================ 00:12:16.294 Supported: No 00:12:16.294 00:12:16.294 Persistent Memory Region Support 00:12:16.294 ================================ 00:12:16.294 Supported: No 00:12:16.294 00:12:16.294 Admin Command Set Attributes 00:12:16.294 ============================ 00:12:16.294 Security Send/Receive: Not Supported 00:12:16.294 Format NVM: Supported 00:12:16.294 Firmware Activate/Download: Not Supported 00:12:16.294 Namespace Management: Supported 00:12:16.294 Device Self-Test: Not Supported 00:12:16.294 Directives: Supported 00:12:16.294 NVMe-MI: Not Supported 00:12:16.294 Virtualization Management: Not Supported 00:12:16.294 Doorbell Buffer Config: Supported 00:12:16.294 Get LBA Status Capability: Not Supported 00:12:16.294 Command & Feature Lockdown Capability: Not Supported 00:12:16.294 Abort Command Limit: 4 00:12:16.294 Async Event Request Limit: 4 00:12:16.294 Number of Firmware Slots: N/A 00:12:16.294 Firmware Slot 1 Read-Only: N/A 00:12:16.294 Firmware Activation Without Reset: N/A 00:12:16.294 Multiple Update Detection Support: N/A 00:12:16.294 Firmware Update Granularity: No Information Provided 00:12:16.294 Per-Namespace SMART Log: Yes 00:12:16.294 Asymmetric Namespace Access Log Page: Not Supported 00:12:16.294 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:16.294 Command Effects Log Page: Supported 00:12:16.294 Get Log Page Extended Data: Supported 00:12:16.294 Telemetry Log Pages: Not Supported 00:12:16.294 Persistent Event Log Pages: Not Supported 00:12:16.294 Supported Log Pages Log Page: May Support 00:12:16.294 Commands Supported & Effects Log Page: Not Supported 00:12:16.294 Feature Identifiers & Effects Log Page: May Support 00:12:16.294 NVMe-MI Commands & Effects Log Page: May Support 00:12:16.294 Data Area 4 for Telemetry Log: Not Supported 00:12:16.294 Error Log Page Entries Supported: 1 00:12:16.294 Keep Alive: Not Supported 00:12:16.294 00:12:16.294 NVM Command Set Attributes 00:12:16.294 ========================== 00:12:16.294 Submission Queue Entry Size 00:12:16.294 Max: 64 00:12:16.294 Min: 64 00:12:16.294 Completion Queue Entry Size 00:12:16.294 Max: 16 00:12:16.294 Min: 16 00:12:16.294 Number of Namespaces: 256 00:12:16.294 Compare Command: Supported 00:12:16.294 Write Uncorrectable Command: Not Supported 00:12:16.294 Dataset Management Command: Supported 00:12:16.294 Write Zeroes Command: Supported 00:12:16.294 Set Features Save Field: Supported 00:12:16.294 Reservations: Not Supported 00:12:16.294 Timestamp: Supported 00:12:16.294 Copy: Supported 00:12:16.294 Volatile Write Cache: Present 00:12:16.294 Atomic Write Unit (Normal): 1 00:12:16.294 Atomic Write Unit (PFail): 1 00:12:16.294 Atomic Compare & Write Unit: 1 00:12:16.294 Fused Compare & Write: Not Supported 00:12:16.294 Scatter-Gather List 00:12:16.294 SGL Command Set: Supported 00:12:16.294 SGL Keyed: Not Supported 00:12:16.294 SGL Bit Bucket Descriptor: Not Supported 00:12:16.294 SGL Metadata Pointer: Not Supported 00:12:16.294 Oversized SGL: Not Supported 00:12:16.294 SGL Metadata Address: Not Supported 00:12:16.294 SGL Offset: Not Supported 00:12:16.294 Transport SGL Data Block: Not Supported 00:12:16.294 Replay Protected Memory Block: Not Supported 00:12:16.294 00:12:16.294 Firmware Slot Information 00:12:16.294 ========================= 00:12:16.294 Active slot: 1 00:12:16.294 Slot 1
Firmware Revision: 1.0 00:12:16.294 00:12:16.294 00:12:16.294 Commands Supported and Effects 00:12:16.294 ============================== 00:12:16.294 Admin Commands 00:12:16.294 -------------- 00:12:16.294 Delete I/O Submission Queue (00h): Supported 00:12:16.294 Create I/O Submission Queue (01h): Supported 00:12:16.294 Get Log Page (02h): Supported 00:12:16.294 Delete I/O Completion Queue (04h): Supported 00:12:16.294 Create I/O Completion Queue (05h): Supported 00:12:16.294 Identify (06h): Supported 00:12:16.294 Abort (08h): Supported 00:12:16.294 Set Features (09h): Supported 00:12:16.294 Get Features (0Ah): Supported 00:12:16.294 Asynchronous Event Request (0Ch): Supported 00:12:16.294 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:16.294 Directive Send (19h): Supported 00:12:16.294 Directive Receive (1Ah): Supported 00:12:16.294 Virtualization Management (1Ch): Supported 00:12:16.294 Doorbell Buffer Config (7Ch): Supported 00:12:16.294 Format NVM (80h): Supported LBA-Change 00:12:16.294 I/O Commands 00:12:16.294 ------------ 00:12:16.294 Flush (00h): Supported LBA-Change 00:12:16.294 Write (01h): Supported LBA-Change 00:12:16.294 Read (02h): Supported 00:12:16.294 Compare (05h): Supported 00:12:16.294 Write Zeroes (08h): Supported LBA-Change 00:12:16.294 Dataset Management (09h): Supported LBA-Change 00:12:16.294 Unknown (0Ch): Supported 00:12:16.294 Unknown (12h): Supported 00:12:16.294 Copy (19h): Supported LBA-Change 00:12:16.294 Unknown (1Dh): Supported LBA-Change 00:12:16.294 00:12:16.294 Error Log 00:12:16.294 ========= 00:12:16.294 00:12:16.294 Arbitration 00:12:16.294 =========== 00:12:16.294 Arbitration Burst: no limit 00:12:16.294 00:12:16.294 Power Management 00:12:16.294 ================ 00:12:16.294 Number of Power States: 1 00:12:16.294 Current Power State: Power State #0 00:12:16.294 Power State #0: 00:12:16.294 Max Power: 25.00 W 00:12:16.294 Non-Operational State: Operational 00:12:16.294 Entry Latency: 16 microseconds 00:12:16.294 Exit Latency: 4 microseconds 00:12:16.294 Relative Read Throughput: 0 00:12:16.294 Relative Read Latency: 0 00:12:16.294 Relative Write Throughput: 0 00:12:16.294 Relative Write Latency: 0 00:12:16.294 Idle Power: Not Reported 00:12:16.294 Active Power: Not Reported 00:12:16.294 Non-Operational Permissive Mode: Not Supported 00:12:16.294 00:12:16.294 Health Information 00:12:16.294 ================== 00:12:16.294 Critical Warnings: 00:12:16.294 Available Spare Space: OK 00:12:16.294 Temperature: OK 00:12:16.294 Device Reliability: OK 00:12:16.294 Read Only: No 00:12:16.294 Volatile Memory Backup: OK 00:12:16.294 Current Temperature: 323 Kelvin (50 Celsius) 00:12:16.294 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:16.294 Available Spare: 0% 00:12:16.294 Available Spare Threshold: 0% 00:12:16.294 Life Percentage Used: 0% 00:12:16.294 Data Units Read: 601 00:12:16.294 Data Units Written: 529 00:12:16.294 Host Read Commands: 29792 00:12:16.294 Host Write Commands: 29578 00:12:16.294 Controller Busy Time: 0 minutes 00:12:16.294 Power Cycles: 0 00:12:16.294 Power On Hours: 0 hours 00:12:16.294 Unsafe Shutdowns: 0 00:12:16.294 Unrecoverable Media Errors: 0 00:12:16.294 Lifetime Error Log Entries: 0 00:12:16.294 Warning Temperature Time: 0 minutes 00:12:16.294 Critical Temperature Time: 0 minutes 00:12:16.294 00:12:16.294 Number of Queues 00:12:16.294 ================ 00:12:16.294 Number of I/O Submission Queues: 64 00:12:16.294 Number of I/O Completion Queues: 64 00:12:16.294 00:12:16.294 ZNS Specific Controller Data 
00:12:16.294 ============================ 00:12:16.294 Zone Append Size Limit: 0 00:12:16.294 00:12:16.294 00:12:16.294 Active Namespaces 00:12:16.294 ================= 00:12:16.294 Namespace ID:1 00:12:16.294 Error Recovery Timeout: Unlimited 00:12:16.294 Command Set Identifier: NVM (00h) 00:12:16.294 Deallocate: Supported 00:12:16.294 Deallocated/Unwritten Error: Supported 00:12:16.294 Deallocated Read Value: All 0x00 00:12:16.294 Deallocate in Write Zeroes: Not Supported 00:12:16.294 Deallocated Guard Field: 0xFFFF 00:12:16.294 Flush: Supported 00:12:16.294 Reservation: Not Supported 00:12:16.294 Metadata Transferred as: Separate Metadata Buffer 00:12:16.294 Namespace Sharing Capabilities: Private 00:12:16.294 Size (in LBAs): 1548666 (5GiB) 00:12:16.294 Capacity (in LBAs): 1548666 (5GiB) 00:12:16.294 Utilization (in LBAs): 1548666 (5GiB) 00:12:16.294 Thin Provisioning: Not Supported 00:12:16.294 Per-NS Atomic Units: No 00:12:16.294 Maximum Single Source Range Length: 128 00:12:16.294 Maximum Copy Length: 128 00:12:16.295 Maximum Source Range Count: 128 00:12:16.295 NGUID/EUI64 Never Reused: No 00:12:16.295 Namespace Write Protected: No 00:12:16.295 Number of LBA Formats: 8 00:12:16.295 Current LBA Format: LBA Format #07 00:12:16.295 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:16.295 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:16.295 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:16.295 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:16.295 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:16.295 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:16.295 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:16.295 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:16.295 00:12:16.295 NVM Specific Namespace Data 00:12:16.295 =========================== 00:12:16.295 Logical Block Storage Tag Mask: 0 00:12:16.295 Protection Information Capabilities: 00:12:16.295 16b Guard Protection Information Storage Tag Support: No 00:12:16.295 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:16.295 Storage Tag Check Read Support: No 00:12:16.295 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.295 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.295 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.295 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.295 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.295 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.295 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.295 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.295 20:40:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:16.295 20:40:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:12:16.553 ===================================================== 00:12:16.553 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:16.553 ===================================================== 00:12:16.553 Controller Capabilities/Features 00:12:16.553 ================================ 00:12:16.553 Vendor ID: 1b36 00:12:16.553 
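[Editor's note, not part of the captured output: the nvme.sh@15/@16 shell traces in this log show spdk_nvme_identify being run once per PCIe address. A minimal standalone sketch of that loop follows; hard-coding bdfs with the four QEMU controllers seen in this run is an assumption, since the capture does not show how the test scripts populate the array.]

    #!/usr/bin/env bash
    # Assumption: the autotest scripts discover these BDFs dynamically; the
    # values below simply mirror the four controllers dumped in this log.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done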
Subsystem Vendor ID: 1af4 00:12:16.553 Serial Number: 12341 00:12:16.553 Model Number: QEMU NVMe Ctrl 00:12:16.553 Firmware Version: 8.0.0 00:12:16.553 Recommended Arb Burst: 6 00:12:16.553 IEEE OUI Identifier: 00 54 52 00:12:16.553 Multi-path I/O 00:12:16.553 May have multiple subsystem ports: No 00:12:16.553 May have multiple controllers: No 00:12:16.553 Associated with SR-IOV VF: No 00:12:16.553 Max Data Transfer Size: 524288 00:12:16.553 Max Number of Namespaces: 256 00:12:16.553 Max Number of I/O Queues: 64 00:12:16.553 NVMe Specification Version (VS): 1.4 00:12:16.553 NVMe Specification Version (Identify): 1.4 00:12:16.553 Maximum Queue Entries: 2048 00:12:16.553 Contiguous Queues Required: Yes 00:12:16.553 Arbitration Mechanisms Supported 00:12:16.553 Weighted Round Robin: Not Supported 00:12:16.553 Vendor Specific: Not Supported 00:12:16.553 Reset Timeout: 7500 ms 00:12:16.553 Doorbell Stride: 4 bytes 00:12:16.553 NVM Subsystem Reset: Not Supported 00:12:16.553 Command Sets Supported 00:12:16.553 NVM Command Set: Supported 00:12:16.553 Boot Partition: Not Supported 00:12:16.553 Memory Page Size Minimum: 4096 bytes 00:12:16.553 Memory Page Size Maximum: 65536 bytes 00:12:16.553 Persistent Memory Region: Not Supported 00:12:16.553 Optional Asynchronous Events Supported 00:12:16.553 Namespace Attribute Notices: Supported 00:12:16.553 Firmware Activation Notices: Not Supported 00:12:16.553 ANA Change Notices: Not Supported 00:12:16.553 PLE Aggregate Log Change Notices: Not Supported 00:12:16.553 LBA Status Info Alert Notices: Not Supported 00:12:16.553 EGE Aggregate Log Change Notices: Not Supported 00:12:16.553 Normal NVM Subsystem Shutdown event: Not Supported 00:12:16.553 Zone Descriptor Change Notices: Not Supported 00:12:16.553 Discovery Log Change Notices: Not Supported 00:12:16.553 Controller Attributes 00:12:16.553 128-bit Host Identifier: Not Supported 00:12:16.553 Non-Operational Permissive Mode: Not Supported 00:12:16.553 NVM Sets: Not Supported 00:12:16.553 Read Recovery Levels: Not Supported 00:12:16.554 Endurance Groups: Not Supported 00:12:16.554 Predictable Latency Mode: Not Supported 00:12:16.554 Traffic Based Keep Alive: Not Supported 00:12:16.554 Namespace Granularity: Not Supported 00:12:16.554 SQ Associations: Not Supported 00:12:16.554 UUID List: Not Supported 00:12:16.554 Multi-Domain Subsystem: Not Supported 00:12:16.554 Fixed Capacity Management: Not Supported 00:12:16.554 Variable Capacity Management: Not Supported 00:12:16.554 Delete Endurance Group: Not Supported 00:12:16.554 Delete NVM Set: Not Supported 00:12:16.554 Extended LBA Formats Supported: Supported 00:12:16.554 Flexible Data Placement Supported: Not Supported 00:12:16.554 00:12:16.554 Controller Memory Buffer Support 00:12:16.554 ================================ 00:12:16.554 Supported: No 00:12:16.554 00:12:16.554 Persistent Memory Region Support 00:12:16.554 ================================ 00:12:16.554 Supported: No 00:12:16.554 00:12:16.554 Admin Command Set Attributes 00:12:16.554 ============================ 00:12:16.554 Security Send/Receive: Not Supported 00:12:16.554 Format NVM: Supported 00:12:16.554 Firmware Activate/Download: Not Supported 00:12:16.554 Namespace Management: Supported 00:12:16.554 Device Self-Test: Not Supported 00:12:16.554 Directives: Supported 00:12:16.554 NVMe-MI: Not Supported 00:12:16.554 Virtualization Management: Not Supported 00:12:16.554 Doorbell Buffer Config: Supported 00:12:16.554 Get LBA Status Capability: Not Supported 00:12:16.554 Command & Feature
Lockdown Capability: Not Supported 00:12:16.554 Abort Command Limit: 4 00:12:16.554 Async Event Request Limit: 4 00:12:16.554 Number of Firmware Slots: N/A 00:12:16.554 Firmware Slot 1 Read-Only: N/A 00:12:16.554 Firmware Activation Without Reset: N/A 00:12:16.554 Multiple Update Detection Support: N/A 00:12:16.554 Firmware Update Granularity: No Information Provided 00:12:16.554 Per-Namespace SMART Log: Yes 00:12:16.554 Asymmetric Namespace Access Log Page: Not Supported 00:12:16.554 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:16.554 Command Effects Log Page: Supported 00:12:16.554 Get Log Page Extended Data: Supported 00:12:16.554 Telemetry Log Pages: Not Supported 00:12:16.554 Persistent Event Log Pages: Not Supported 00:12:16.554 Supported Log Pages Log Page: May Support 00:12:16.554 Commands Supported & Effects Log Page: Not Supported 00:12:16.554 Feature Identifiers & Effects Log Page: May Support 00:12:16.554 NVMe-MI Commands & Effects Log Page: May Support 00:12:16.554 Data Area 4 for Telemetry Log: Not Supported 00:12:16.554 Error Log Page Entries Supported: 1 00:12:16.554 Keep Alive: Not Supported 00:12:16.554 00:12:16.554 NVM Command Set Attributes 00:12:16.554 ========================== 00:12:16.554 Submission Queue Entry Size 00:12:16.554 Max: 64 00:12:16.554 Min: 64 00:12:16.554 Completion Queue Entry Size 00:12:16.554 Max: 16 00:12:16.554 Min: 16 00:12:16.554 Number of Namespaces: 256 00:12:16.554 Compare Command: Supported 00:12:16.554 Write Uncorrectable Command: Not Supported 00:12:16.554 Dataset Management Command: Supported 00:12:16.554 Write Zeroes Command: Supported 00:12:16.554 Set Features Save Field: Supported 00:12:16.554 Reservations: Not Supported 00:12:16.554 Timestamp: Supported 00:12:16.554 Copy: Supported 00:12:16.554 Volatile Write Cache: Present 00:12:16.554 Atomic Write Unit (Normal): 1 00:12:16.554 Atomic Write Unit (PFail): 1 00:12:16.554 Atomic Compare & Write Unit: 1 00:12:16.554 Fused Compare & Write: Not Supported 00:12:16.554 Scatter-Gather List 00:12:16.554 SGL Command Set: Supported 00:12:16.554 SGL Keyed: Not Supported 00:12:16.554 SGL Bit Bucket Descriptor: Not Supported 00:12:16.554 SGL Metadata Pointer: Not Supported 00:12:16.554 Oversized SGL: Not Supported 00:12:16.554 SGL Metadata Address: Not Supported 00:12:16.554 SGL Offset: Not Supported 00:12:16.554 Transport SGL Data Block: Not Supported 00:12:16.554 Replay Protected Memory Block: Not Supported 00:12:16.554 00:12:16.554 Firmware Slot Information 00:12:16.554 ========================= 00:12:16.554 Active slot: 1 00:12:16.554 Slot 1 Firmware Revision: 1.0 00:12:16.554 00:12:16.554 00:12:16.554 Commands Supported and Effects 00:12:16.554 ============================== 00:12:16.554 Admin Commands 00:12:16.554 -------------- 00:12:16.554 Delete I/O Submission Queue (00h): Supported 00:12:16.554 Create I/O Submission Queue (01h): Supported 00:12:16.554 Get Log Page (02h): Supported 00:12:16.554 Delete I/O Completion Queue (04h): Supported 00:12:16.554 Create I/O Completion Queue (05h): Supported 00:12:16.554 Identify (06h): Supported 00:12:16.554 Abort (08h): Supported 00:12:16.554 Set Features (09h): Supported 00:12:16.554 Get Features (0Ah): Supported 00:12:16.554 Asynchronous Event Request (0Ch): Supported 00:12:16.554 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:16.554 Directive Send (19h): Supported 00:12:16.554 Directive Receive (1Ah): Supported 00:12:16.554 Virtualization Management (1Ch): Supported 00:12:16.554 Doorbell Buffer Config (7Ch): Supported
00:12:16.554 Format NVM (80h): Supported LBA-Change 00:12:16.554 I/O Commands 00:12:16.554 ------------ 00:12:16.554 Flush (00h): Supported LBA-Change 00:12:16.554 Write (01h): Supported LBA-Change 00:12:16.554 Read (02h): Supported 00:12:16.554 Compare (05h): Supported 00:12:16.554 Write Zeroes (08h): Supported LBA-Change 00:12:16.554 Dataset Management (09h): Supported LBA-Change 00:12:16.554 Unknown (0Ch): Supported 00:12:16.554 Unknown (12h): Supported 00:12:16.554 Copy (19h): Supported LBA-Change 00:12:16.554 Unknown (1Dh): Supported LBA-Change 00:12:16.554 00:12:16.554 Error Log 00:12:16.554 ========= 00:12:16.554 00:12:16.554 Arbitration 00:12:16.554 =========== 00:12:16.554 Arbitration Burst: no limit 00:12:16.554 00:12:16.554 Power Management 00:12:16.554 ================ 00:12:16.554 Number of Power States: 1 00:12:16.554 Current Power State: Power State #0 00:12:16.554 Power State #0: 00:12:16.554 Max Power: 25.00 W 00:12:16.554 Non-Operational State: Operational 00:12:16.554 Entry Latency: 16 microseconds 00:12:16.554 Exit Latency: 4 microseconds 00:12:16.554 Relative Read Throughput: 0 00:12:16.554 Relative Read Latency: 0 00:12:16.554 Relative Write Throughput: 0 00:12:16.554 Relative Write Latency: 0 00:12:16.554 Idle Power: Not Reported 00:12:16.554 Active Power: Not Reported 00:12:16.554 Non-Operational Permissive Mode: Not Supported 00:12:16.554 00:12:16.554 Health Information 00:12:16.554 ================== 00:12:16.554 Critical Warnings: 00:12:16.554 Available Spare Space: OK 00:12:16.554 Temperature: OK 00:12:16.554 Device Reliability: OK 00:12:16.554 Read Only: No 00:12:16.554 Volatile Memory Backup: OK 00:12:16.554 Current Temperature: 323 Kelvin (50 Celsius) 00:12:16.554 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:16.554 Available Spare: 0% 00:12:16.554 Available Spare Threshold: 0% 00:12:16.554 Life Percentage Used: 0% 00:12:16.554 Data Units Read: 921 00:12:16.554 Data Units Written: 788 00:12:16.554 Host Read Commands: 44712 00:12:16.554 Host Write Commands: 43486 00:12:16.554 Controller Busy Time: 0 minutes 00:12:16.554 Power Cycles: 0 00:12:16.554 Power On Hours: 0 hours 00:12:16.554 Unsafe Shutdowns: 0 00:12:16.554 Unrecoverable Media Errors: 0 00:12:16.554 Lifetime Error Log Entries: 0 00:12:16.554 Warning Temperature Time: 0 minutes 00:12:16.554 Critical Temperature Time: 0 minutes 00:12:16.554 00:12:16.554 Number of Queues 00:12:16.554 ================ 00:12:16.554 Number of I/O Submission Queues: 64 00:12:16.554 Number of I/O Completion Queues: 64 00:12:16.554 00:12:16.554 ZNS Specific Controller Data 00:12:16.554 ============================ 00:12:16.554 Zone Append Size Limit: 0 00:12:16.554 00:12:16.554 00:12:16.554 Active Namespaces 00:12:16.554 ================= 00:12:16.554 Namespace ID:1 00:12:16.554 Error Recovery Timeout: Unlimited 00:12:16.554 Command Set Identifier: NVM (00h) 00:12:16.554 Deallocate: Supported 00:12:16.554 Deallocated/Unwritten Error: Supported 00:12:16.554 Deallocated Read Value: All 0x00 00:12:16.554 Deallocate in Write Zeroes: Not Supported 00:12:16.554 Deallocated Guard Field: 0xFFFF 00:12:16.554 Flush: Supported 00:12:16.554 Reservation: Not Supported 00:12:16.554 Namespace Sharing Capabilities: Private 00:12:16.554 Size (in LBAs): 1310720 (5GiB) 00:12:16.554 Capacity (in LBAs): 1310720 (5GiB) 00:12:16.554 Utilization (in LBAs): 1310720 (5GiB) 00:12:16.554 Thin Provisioning: Not Supported 00:12:16.554 Per-NS Atomic Units: No 00:12:16.554 Maximum Single Source Range Length: 128 00:12:16.554 Maximum Copy Length: 128 
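[Editor's note, not part of the captured output: the derived figures in these dumps check out as simple shell arithmetic. The Celsius values are the Kelvin readings minus 273, the "(5GiB)" size for this namespace follows from its LBA count times the 4096-byte data size of its current LBA format (#04, listed just below), and for formats with nonzero metadata an interleaved (extended) LBA carries data plus metadata per block.]

    echo $(( 323 - 273 ))                 # 50 -> "323 Kelvin (50 Celsius)"
    echo $(( 343 - 273 ))                 # 70 -> "343 Kelvin (70 Celsius)"
    echo $(( 1310720 * 4096 / 1024**3 ))  # 5  -> "Size (in LBAs): 1310720 (5GiB)"
    echo $(( 4096 + 64 ))                 # 4160 bytes per block for Format #07
                                          # when metadata is interleaved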
00:12:16.554 Maximum Source Range Count: 128 00:12:16.555 NGUID/EUI64 Never Reused: No 00:12:16.555 Namespace Write Protected: No 00:12:16.555 Number of LBA Formats: 8 00:12:16.555 Current LBA Format: LBA Format #04 00:12:16.555 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:16.555 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:16.555 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:16.555 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:16.555 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:16.555 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:16.555 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:16.555 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:16.555 00:12:16.555 NVM Specific Namespace Data 00:12:16.555 =========================== 00:12:16.555 Logical Block Storage Tag Mask: 0 00:12:16.555 Protection Information Capabilities: 00:12:16.555 16b Guard Protection Information Storage Tag Support: No 00:12:16.555 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:16.555 Storage Tag Check Read Support: No 00:12:16.555 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.555 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.555 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.555 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.555 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.555 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.555 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.555 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.555 20:40:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:16.555 20:40:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:12:17.125 ===================================================== 00:12:17.125 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:17.125 ===================================================== 00:12:17.125 Controller Capabilities/Features 00:12:17.125 ================================ 00:12:17.125 Vendor ID: 1b36 00:12:17.125 Subsystem Vendor ID: 1af4 00:12:17.125 Serial Number: 12342 00:12:17.125 Model Number: QEMU NVMe Ctrl 00:12:17.125 Firmware Version: 8.0.0 00:12:17.125 Recommended Arb Burst: 6 00:12:17.125 IEEE OUI Identifier: 00 54 52 00:12:17.125 Multi-path I/O 00:12:17.125 May have multiple subsystem ports: No 00:12:17.125 May have multiple controllers: No 00:12:17.125 Associated with SR-IOV VF: No 00:12:17.125 Max Data Transfer Size: 524288 00:12:17.125 Max Number of Namespaces: 256 00:12:17.125 Max Number of I/O Queues: 64 00:12:17.125 NVMe Specification Version (VS): 1.4 00:12:17.125 NVMe Specification Version (Identify): 1.4 00:12:17.125 Maximum Queue Entries: 2048 00:12:17.125 Contiguous Queues Required: Yes 00:12:17.125 Arbitration Mechanisms Supported 00:12:17.125 Weighted Round Robin: Not Supported 00:12:17.125 Vendor Specific: Not Supported 00:12:17.125 Reset Timeout: 7500 ms 00:12:17.125 Doorbell Stride: 4 bytes 00:12:17.125 NVM Subsystem Reset: Not Supported 00:12:17.125 Command Sets Supported 00:12:17.125 NVM Command 
Set: Supported 00:12:17.125 Boot Partition: Not Supported 00:12:17.125 Memory Page Size Minimum: 4096 bytes 00:12:17.125 Memory Page Size Maximum: 65536 bytes 00:12:17.125 Persistent Memory Region: Not Supported 00:12:17.125 Optional Asynchronous Events Supported 00:12:17.125 Namespace Attribute Notices: Supported 00:12:17.125 Firmware Activation Notices: Not Supported 00:12:17.125 ANA Change Notices: Not Supported 00:12:17.125 PLE Aggregate Log Change Notices: Not Supported 00:12:17.125 LBA Status Info Alert Notices: Not Supported 00:12:17.125 EGE Aggregate Log Change Notices: Not Supported 00:12:17.125 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.125 Zone Descriptor Change Notices: Not Supported 00:12:17.125 Discovery Log Change Notices: Not Supported 00:12:17.125 Controller Attributes 00:12:17.125 128-bit Host Identifier: Not Supported 00:12:17.125 Non-Operational Permissive Mode: Not Supported 00:12:17.125 NVM Sets: Not Supported 00:12:17.125 Read Recovery Levels: Not Supported 00:12:17.125 Endurance Groups: Not Supported 00:12:17.125 Predictable Latency Mode: Not Supported 00:12:17.125 Traffic Based Keep Alive: Not Supported 00:12:17.125 Namespace Granularity: Not Supported 00:12:17.125 SQ Associations: Not Supported 00:12:17.125 UUID List: Not Supported 00:12:17.125 Multi-Domain Subsystem: Not Supported 00:12:17.125 Fixed Capacity Management: Not Supported 00:12:17.125 Variable Capacity Management: Not Supported 00:12:17.125 Delete Endurance Group: Not Supported 00:12:17.125 Delete NVM Set: Not Supported 00:12:17.125 Extended LBA Formats Supported: Supported 00:12:17.125 Flexible Data Placement Supported: Not Supported 00:12:17.125 00:12:17.125 Controller Memory Buffer Support 00:12:17.125 ================================ 00:12:17.125 Supported: No 00:12:17.125 00:12:17.125 Persistent Memory Region Support 00:12:17.125 ================================ 00:12:17.125 Supported: No 00:12:17.125 00:12:17.125 Admin Command Set Attributes 00:12:17.125 ============================ 00:12:17.125 Security Send/Receive: Not Supported 00:12:17.125 Format NVM: Supported 00:12:17.125 Firmware Activate/Download: Not Supported 00:12:17.125 Namespace Management: Supported 00:12:17.125 Device Self-Test: Not Supported 00:12:17.125 Directives: Supported 00:12:17.125 NVMe-MI: Not Supported 00:12:17.125 Virtualization Management: Not Supported 00:12:17.125 Doorbell Buffer Config: Supported 00:12:17.125 Get LBA Status Capability: Not Supported 00:12:17.125 Command & Feature Lockdown Capability: Not Supported 00:12:17.125 Abort Command Limit: 4 00:12:17.125 Async Event Request Limit: 4 00:12:17.125 Number of Firmware Slots: N/A 00:12:17.125 Firmware Slot 1 Read-Only: N/A 00:12:17.125 Firmware Activation Without Reset: N/A 00:12:17.125 Multiple Update Detection Support: N/A 00:12:17.125 Firmware Update Granularity: No Information Provided 00:12:17.125 Per-Namespace SMART Log: Yes 00:12:17.125 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.125 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:17.125 Command Effects Log Page: Supported 00:12:17.125 Get Log Page Extended Data: Supported 00:12:17.125 Telemetry Log Pages: Not Supported 00:12:17.125 Persistent Event Log Pages: Not Supported 00:12:17.125 Supported Log Pages Log Page: May Support 00:12:17.125 Commands Supported & Effects Log Page: Not Supported 00:12:17.125 Feature Identifiers & Effects Log Page: May Support 00:12:17.125 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.125 Data Area 4 for Telemetry Log: Not
Supported 00:12:17.125 Error Log Page Entries Supported: 1 00:12:17.125 Keep Alive: Not Supported 00:12:17.125 00:12:17.125 NVM Command Set Attributes 00:12:17.125 ========================== 00:12:17.125 Submission Queue Entry Size 00:12:17.125 Max: 64 00:12:17.125 Min: 64 00:12:17.125 Completion Queue Entry Size 00:12:17.125 Max: 16 00:12:17.125 Min: 16 00:12:17.125 Number of Namespaces: 256 00:12:17.125 Compare Command: Supported 00:12:17.125 Write Uncorrectable Command: Not Supported 00:12:17.125 Dataset Management Command: Supported 00:12:17.125 Write Zeroes Command: Supported 00:12:17.125 Set Features Save Field: Supported 00:12:17.125 Reservations: Not Supported 00:12:17.125 Timestamp: Supported 00:12:17.125 Copy: Supported 00:12:17.125 Volatile Write Cache: Present 00:12:17.125 Atomic Write Unit (Normal): 1 00:12:17.125 Atomic Write Unit (PFail): 1 00:12:17.125 Atomic Compare & Write Unit: 1 00:12:17.125 Fused Compare & Write: Not Supported 00:12:17.125 Scatter-Gather List 00:12:17.125 SGL Command Set: Supported 00:12:17.125 SGL Keyed: Not Supported 00:12:17.125 SGL Bit Bucket Descriptor: Not Supported 00:12:17.125 SGL Metadata Pointer: Not Supported 00:12:17.125 Oversized SGL: Not Supported 00:12:17.125 SGL Metadata Address: Not Supported 00:12:17.125 SGL Offset: Not Supported 00:12:17.125 Transport SGL Data Block: Not Supported 00:12:17.125 Replay Protected Memory Block: Not Supported 00:12:17.125 00:12:17.125 Firmware Slot Information 00:12:17.125 ========================= 00:12:17.125 Active slot: 1 00:12:17.125 Slot 1 Firmware Revision: 1.0 00:12:17.125 00:12:17.125 00:12:17.125 Commands Supported and Effects 00:12:17.125 ============================== 00:12:17.125 Admin Commands 00:12:17.125 -------------- 00:12:17.125 Delete I/O Submission Queue (00h): Supported 00:12:17.125 Create I/O Submission Queue (01h): Supported 00:12:17.125 Get Log Page (02h): Supported 00:12:17.125 Delete I/O Completion Queue (04h): Supported 00:12:17.125 Create I/O Completion Queue (05h): Supported 00:12:17.125 Identify (06h): Supported 00:12:17.125 Abort (08h): Supported 00:12:17.125 Set Features (09h): Supported 00:12:17.125 Get Features (0Ah): Supported 00:12:17.125 Asynchronous Event Request (0Ch): Supported 00:12:17.125 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.125 Directive Send (19h): Supported 00:12:17.125 Directive Receive (1Ah): Supported 00:12:17.125 Virtualization Management (1Ch): Supported 00:12:17.126 Doorbell Buffer Config (7Ch): Supported 00:12:17.126 Format NVM (80h): Supported LBA-Change 00:12:17.126 I/O Commands 00:12:17.126 ------------ 00:12:17.126 Flush (00h): Supported LBA-Change 00:12:17.126 Write (01h): Supported LBA-Change 00:12:17.126 Read (02h): Supported 00:12:17.126 Compare (05h): Supported 00:12:17.126 Write Zeroes (08h): Supported LBA-Change 00:12:17.126 Dataset Management (09h): Supported LBA-Change 00:12:17.126 Unknown (0Ch): Supported 00:12:17.126 Unknown (12h): Supported 00:12:17.126 Copy (19h): Supported LBA-Change 00:12:17.126 Unknown (1Dh): Supported LBA-Change 00:12:17.126 00:12:17.126 Error Log 00:12:17.126 ========= 00:12:17.126 00:12:17.126 Arbitration 00:12:17.126 =========== 00:12:17.126 Arbitration Burst: no limit 00:12:17.126 00:12:17.126 Power Management 00:12:17.126 ================ 00:12:17.126 Number of Power States: 1 00:12:17.126 Current Power State: Power State #0 00:12:17.126 Power State #0: 00:12:17.126 Max Power: 25.00 W 00:12:17.126 Non-Operational State: Operational 00:12:17.126 Entry Latency: 16 microseconds 
00:12:17.126 Exit Latency: 4 microseconds 00:12:17.126 Relative Read Throughput: 0 00:12:17.126 Relative Read Latency: 0 00:12:17.126 Relative Write Throughput: 0 00:12:17.126 Relative Write Latency: 0 00:12:17.126 Idle Power: Not Reported 00:12:17.126 Active Power: Not Reported 00:12:17.126 Non-Operational Permissive Mode: Not Supported 00:12:17.126 00:12:17.126 Health Information 00:12:17.126 ================== 00:12:17.126 Critical Warnings: 00:12:17.126 Available Spare Space: OK 00:12:17.126 Temperature: OK 00:12:17.126 Device Reliability: OK 00:12:17.126 Read Only: No 00:12:17.126 Volatile Memory Backup: OK 00:12:17.126 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.126 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.126 Available Spare: 0% 00:12:17.126 Available Spare Threshold: 0% 00:12:17.126 Life Percentage Used: 0% 00:12:17.126 Data Units Read: 1899 00:12:17.126 Data Units Written: 1686 00:12:17.126 Host Read Commands: 91214 00:12:17.126 Host Write Commands: 89483 00:12:17.126 Controller Busy Time: 0 minutes 00:12:17.126 Power Cycles: 0 00:12:17.126 Power On Hours: 0 hours 00:12:17.126 Unsafe Shutdowns: 0 00:12:17.126 Unrecoverable Media Errors: 0 00:12:17.126 Lifetime Error Log Entries: 0 00:12:17.126 Warning Temperature Time: 0 minutes 00:12:17.126 Critical Temperature Time: 0 minutes 00:12:17.126 00:12:17.126 Number of Queues 00:12:17.126 ================ 00:12:17.126 Number of I/O Submission Queues: 64 00:12:17.126 Number of I/O Completion Queues: 64 00:12:17.126 00:12:17.126 ZNS Specific Controller Data 00:12:17.126 ============================ 00:12:17.126 Zone Append Size Limit: 0 00:12:17.126 00:12:17.126 00:12:17.126 Active Namespaces 00:12:17.126 ================= 00:12:17.126 Namespace ID:1 00:12:17.126 Error Recovery Timeout: Unlimited 00:12:17.126 Command Set Identifier: NVM (00h) 00:12:17.126 Deallocate: Supported 00:12:17.126 Deallocated/Unwritten Error: Supported 00:12:17.126 Deallocated Read Value: All 0x00 00:12:17.126 Deallocate in Write Zeroes: Not Supported 00:12:17.126 Deallocated Guard Field: 0xFFFF 00:12:17.126 Flush: Supported 00:12:17.126 Reservation: Not Supported 00:12:17.126 Namespace Sharing Capabilities: Private 00:12:17.126 Size (in LBAs): 1048576 (4GiB) 00:12:17.126 Capacity (in LBAs): 1048576 (4GiB) 00:12:17.126 Utilization (in LBAs): 1048576 (4GiB) 00:12:17.126 Thin Provisioning: Not Supported 00:12:17.126 Per-NS Atomic Units: No 00:12:17.126 Maximum Single Source Range Length: 128 00:12:17.126 Maximum Copy Length: 128 00:12:17.126 Maximum Source Range Count: 128 00:12:17.126 NGUID/EUI64 Never Reused: No 00:12:17.126 Namespace Write Protected: No 00:12:17.126 Number of LBA Formats: 8 00:12:17.126 Current LBA Format: LBA Format #04 00:12:17.126 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.126 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.126 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.126 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.126 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.126 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.126 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.126 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.126 00:12:17.126 NVM Specific Namespace Data 00:12:17.126 =========================== 00:12:17.126 Logical Block Storage Tag Mask: 0 00:12:17.126 Protection Information Capabilities: 00:12:17.126 16b Guard Protection Information Storage Tag Support: No 00:12:17.126 16b Guard Protection Information Storage Tag 
Mask: Any bit in LBSTM can be 0 00:12:17.126 Storage Tag Check Read Support: No 00:12:17.126 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Namespace ID:2 00:12:17.126 Error Recovery Timeout: Unlimited 00:12:17.126 Command Set Identifier: NVM (00h) 00:12:17.126 Deallocate: Supported 00:12:17.126 Deallocated/Unwritten Error: Supported 00:12:17.126 Deallocated Read Value: All 0x00 00:12:17.126 Deallocate in Write Zeroes: Not Supported 00:12:17.126 Deallocated Guard Field: 0xFFFF 00:12:17.126 Flush: Supported 00:12:17.126 Reservation: Not Supported 00:12:17.126 Namespace Sharing Capabilities: Private 00:12:17.126 Size (in LBAs): 1048576 (4GiB) 00:12:17.126 Capacity (in LBAs): 1048576 (4GiB) 00:12:17.126 Utilization (in LBAs): 1048576 (4GiB) 00:12:17.126 Thin Provisioning: Not Supported 00:12:17.126 Per-NS Atomic Units: No 00:12:17.126 Maximum Single Source Range Length: 128 00:12:17.126 Maximum Copy Length: 128 00:12:17.126 Maximum Source Range Count: 128 00:12:17.126 NGUID/EUI64 Never Reused: No 00:12:17.126 Namespace Write Protected: No 00:12:17.126 Number of LBA Formats: 8 00:12:17.126 Current LBA Format: LBA Format #04 00:12:17.126 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.126 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.126 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.126 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.126 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.126 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.126 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.126 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.126 00:12:17.126 NVM Specific Namespace Data 00:12:17.126 =========================== 00:12:17.126 Logical Block Storage Tag Mask: 0 00:12:17.126 Protection Information Capabilities: 00:12:17.126 16b Guard Protection Information Storage Tag Support: No 00:12:17.126 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.126 Storage Tag Check Read Support: No 00:12:17.126 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:12:17.126 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.126 Namespace ID:3 00:12:17.126 Error Recovery Timeout: Unlimited 00:12:17.126 Command Set Identifier: NVM (00h) 00:12:17.126 Deallocate: Supported 00:12:17.126 Deallocated/Unwritten Error: Supported 00:12:17.126 Deallocated Read Value: All 0x00 00:12:17.126 Deallocate in Write Zeroes: Not Supported 00:12:17.126 Deallocated Guard Field: 0xFFFF 00:12:17.126 Flush: Supported 00:12:17.127 Reservation: Not Supported 00:12:17.127 Namespace Sharing Capabilities: Private 00:12:17.127 Size (in LBAs): 1048576 (4GiB) 00:12:17.127 Capacity (in LBAs): 1048576 (4GiB) 00:12:17.127 Utilization (in LBAs): 1048576 (4GiB) 00:12:17.127 Thin Provisioning: Not Supported 00:12:17.127 Per-NS Atomic Units: No 00:12:17.127 Maximum Single Source Range Length: 128 00:12:17.127 Maximum Copy Length: 128 00:12:17.127 Maximum Source Range Count: 128 00:12:17.127 NGUID/EUI64 Never Reused: No 00:12:17.127 Namespace Write Protected: No 00:12:17.127 Number of LBA Formats: 8 00:12:17.127 Current LBA Format: LBA Format #04 00:12:17.127 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.127 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.127 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.127 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.127 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.127 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.127 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.127 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.127 00:12:17.127 NVM Specific Namespace Data 00:12:17.127 =========================== 00:12:17.127 Logical Block Storage Tag Mask: 0 00:12:17.127 Protection Information Capabilities: 00:12:17.127 16b Guard Protection Information Storage Tag Support: No 00:12:17.127 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.127 Storage Tag Check Read Support: No 00:12:17.127 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.127 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.127 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.127 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.127 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.127 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.127 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.127 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.127 20:40:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:17.127 20:40:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:12:17.385 ===================================================== 00:12:17.385 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:17.385 ===================================================== 00:12:17.385 Controller Capabilities/Features 00:12:17.385 ================================ 00:12:17.385 Vendor ID: 1b36 00:12:17.385 Subsystem Vendor ID: 1af4 00:12:17.385 Serial Number: 12343 00:12:17.385 Model Number: QEMU NVMe Ctrl 00:12:17.385 Firmware Version: 
8.0.0 00:12:17.385 Recommended Arb Burst: 6 00:12:17.385 IEEE OUI Identifier: 00 54 52 00:12:17.385 Multi-path I/O 00:12:17.385 May have multiple subsystem ports: No 00:12:17.385 May have multiple controllers: Yes 00:12:17.385 Associated with SR-IOV VF: No 00:12:17.385 Max Data Transfer Size: 524288 00:12:17.385 Max Number of Namespaces: 256 00:12:17.385 Max Number of I/O Queues: 64 00:12:17.385 NVMe Specification Version (VS): 1.4 00:12:17.385 NVMe Specification Version (Identify): 1.4 00:12:17.385 Maximum Queue Entries: 2048 00:12:17.385 Contiguous Queues Required: Yes 00:12:17.385 Arbitration Mechanisms Supported 00:12:17.385 Weighted Round Robin: Not Supported 00:12:17.385 Vendor Specific: Not Supported 00:12:17.385 Reset Timeout: 7500 ms 00:12:17.386 Doorbell Stride: 4 bytes 00:12:17.386 NVM Subsystem Reset: Not Supported 00:12:17.386 Command Sets Supported 00:12:17.386 NVM Command Set: Supported 00:12:17.386 Boot Partition: Not Supported 00:12:17.386 Memory Page Size Minimum: 4096 bytes 00:12:17.386 Memory Page Size Maximum: 65536 bytes 00:12:17.386 Persistent Memory Region: Not Supported 00:12:17.386 Optional Asynchronous Events Supported 00:12:17.386 Namespace Attribute Notices: Supported 00:12:17.386 Firmware Activation Notices: Not Supported 00:12:17.386 ANA Change Notices: Not Supported 00:12:17.386 PLE Aggregate Log Change Notices: Not Supported 00:12:17.386 LBA Status Info Alert Notices: Not Supported 00:12:17.386 EGE Aggregate Log Change Notices: Not Supported 00:12:17.386 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.386 Zone Descriptor Change Notices: Not Supported 00:12:17.386 Discovery Log Change Notices: Not Supported 00:12:17.386 Controller Attributes 00:12:17.386 128-bit Host Identifier: Not Supported 00:12:17.386 Non-Operational Permissive Mode: Not Supported 00:12:17.386 NVM Sets: Not Supported 00:12:17.386 Read Recovery Levels: Not Supported 00:12:17.386 Endurance Groups: Supported 00:12:17.386 Predictable Latency Mode: Not Supported 00:12:17.386 Traffic Based Keep Alive: Not Supported 00:12:17.386 Namespace Granularity: Not Supported 00:12:17.386 SQ Associations: Not Supported 00:12:17.386 UUID List: Not Supported 00:12:17.386 Multi-Domain Subsystem: Not Supported 00:12:17.386 Fixed Capacity Management: Not Supported 00:12:17.386 Variable Capacity Management: Not Supported 00:12:17.386 Delete Endurance Group: Not Supported 00:12:17.386 Delete NVM Set: Not Supported 00:12:17.386 Extended LBA Formats Supported: Supported 00:12:17.386 Flexible Data Placement Supported: Supported 00:12:17.386 00:12:17.386 Controller Memory Buffer Support 00:12:17.386 ================================ 00:12:17.386 Supported: No 00:12:17.386 00:12:17.386 Persistent Memory Region Support 00:12:17.386 ================================ 00:12:17.386 Supported: No 00:12:17.386 00:12:17.386 Admin Command Set Attributes 00:12:17.386 ============================ 00:12:17.386 Security Send/Receive: Not Supported 00:12:17.386 Format NVM: Supported 00:12:17.386 Firmware Activate/Download: Not Supported 00:12:17.386 Namespace Management: Supported 00:12:17.386 Device Self-Test: Not Supported 00:12:17.386 Directives: Supported 00:12:17.386 NVMe-MI: Not Supported 00:12:17.386 Virtualization Management: Not Supported 00:12:17.386 Doorbell Buffer Config: Supported 00:12:17.386 Get LBA Status Capability: Not Supported 00:12:17.386 Command & Feature Lockdown Capability: Not Supported 00:12:17.386 Abort Command Limit: 4 00:12:17.386 Async Event Request Limit: 4 00:12:17.386 Number of Firmware
Slots: N/A 00:12:17.386 Firmware Slot 1 Read-Only: N/A 00:12:17.386 Firmware Activation Without Reset: N/A 00:12:17.386 Multiple Update Detection Support: N/A 00:12:17.386 Firmware Update Granularity: No Information Provided 00:12:17.386 Per-Namespace SMART Log: Yes 00:12:17.386 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.386 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:17.386 Command Effects Log Page: Supported 00:12:17.386 Get Log Page Extended Data: Supported 00:12:17.386 Telemetry Log Pages: Not Supported 00:12:17.386 Persistent Event Log Pages: Not Supported 00:12:17.386 Supported Log Pages Log Page: May Support 00:12:17.386 Commands Supported & Effects Log Page: Not Supported 00:12:17.386 Feature Identifiers & Effects Log Page: May Support 00:12:17.386 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.386 Data Area 4 for Telemetry Log: Not Supported 00:12:17.386 Error Log Page Entries Supported: 1 00:12:17.386 Keep Alive: Not Supported 00:12:17.386 00:12:17.386 NVM Command Set Attributes 00:12:17.386 ========================== 00:12:17.386 Submission Queue Entry Size 00:12:17.386 Max: 64 00:12:17.386 Min: 64 00:12:17.386 Completion Queue Entry Size 00:12:17.386 Max: 16 00:12:17.386 Min: 16 00:12:17.386 Number of Namespaces: 256 00:12:17.386 Compare Command: Supported 00:12:17.386 Write Uncorrectable Command: Not Supported 00:12:17.386 Dataset Management Command: Supported 00:12:17.386 Write Zeroes Command: Supported 00:12:17.386 Set Features Save Field: Supported 00:12:17.386 Reservations: Not Supported 00:12:17.386 Timestamp: Supported 00:12:17.386 Copy: Supported 00:12:17.386 Volatile Write Cache: Present 00:12:17.386 Atomic Write Unit (Normal): 1 00:12:17.386 Atomic Write Unit (PFail): 1 00:12:17.386 Atomic Compare & Write Unit: 1 00:12:17.386 Fused Compare & Write: Not Supported 00:12:17.386 Scatter-Gather List 00:12:17.386 SGL Command Set: Supported 00:12:17.386 SGL Keyed: Not Supported 00:12:17.386 SGL Bit Bucket Descriptor: Not Supported 00:12:17.386 SGL Metadata Pointer: Not Supported 00:12:17.386 Oversized SGL: Not Supported 00:12:17.386 SGL Metadata Address: Not Supported 00:12:17.386 SGL Offset: Not Supported 00:12:17.386 Transport SGL Data Block: Not Supported 00:12:17.386 Replay Protected Memory Block: Not Supported 00:12:17.386 00:12:17.386 Firmware Slot Information 00:12:17.386 ========================= 00:12:17.386 Active slot: 1 00:12:17.386 Slot 1 Firmware Revision: 1.0 00:12:17.386 00:12:17.386 00:12:17.386 Commands Supported and Effects 00:12:17.386 ============================== 00:12:17.386 Admin Commands 00:12:17.386 -------------- 00:12:17.386 Delete I/O Submission Queue (00h): Supported 00:12:17.386 Create I/O Submission Queue (01h): Supported 00:12:17.386 Get Log Page (02h): Supported 00:12:17.386 Delete I/O Completion Queue (04h): Supported 00:12:17.386 Create I/O Completion Queue (05h): Supported 00:12:17.386 Identify (06h): Supported 00:12:17.386 Abort (08h): Supported 00:12:17.386 Set Features (09h): Supported 00:12:17.386 Get Features (0Ah): Supported 00:12:17.386 Asynchronous Event Request (0Ch): Supported 00:12:17.386 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.386 Directive Send (19h): Supported 00:12:17.386 Directive Receive (1Ah): Supported 00:12:17.386 Virtualization Management (1Ch): Supported 00:12:17.386 Doorbell Buffer Config (7Ch): Supported 00:12:17.386 Format NVM (80h): Supported LBA-Change 00:12:17.386 I/O Commands 00:12:17.386 ------------ 00:12:17.386 Flush (00h): Supported
LBA-Change 00:12:17.386 Write (01h): Supported LBA-Change 00:12:17.386 Read (02h): Supported 00:12:17.386 Compare (05h): Supported 00:12:17.386 Write Zeroes (08h): Supported LBA-Change 00:12:17.386 Dataset Management (09h): Supported LBA-Change 00:12:17.386 Unknown (0Ch): Supported 00:12:17.386 Unknown (12h): Supported 00:12:17.386 Copy (19h): Supported LBA-Change 00:12:17.386 Unknown (1Dh): Supported LBA-Change 00:12:17.386 00:12:17.386 Error Log 00:12:17.386 ========= 00:12:17.386 00:12:17.386 Arbitration 00:12:17.386 =========== 00:12:17.386 Arbitration Burst: no limit 00:12:17.386 00:12:17.386 Power Management 00:12:17.386 ================ 00:12:17.386 Number of Power States: 1 00:12:17.386 Current Power State: Power State #0 00:12:17.386 Power State #0: 00:12:17.386 Max Power: 25.00 W 00:12:17.386 Non-Operational State: Operational 00:12:17.386 Entry Latency: 16 microseconds 00:12:17.386 Exit Latency: 4 microseconds 00:12:17.386 Relative Read Throughput: 0 00:12:17.386 Relative Read Latency: 0 00:12:17.386 Relative Write Throughput: 0 00:12:17.386 Relative Write Latency: 0 00:12:17.386 Idle Power: Not Reported 00:12:17.386 Active Power: Not Reported 00:12:17.386 Non-Operational Permissive Mode: Not Supported 00:12:17.386 00:12:17.386 Health Information 00:12:17.386 ================== 00:12:17.386 Critical Warnings: 00:12:17.386 Available Spare Space: OK 00:12:17.386 Temperature: OK 00:12:17.386 Device Reliability: OK 00:12:17.386 Read Only: No 00:12:17.386 Volatile Memory Backup: OK 00:12:17.386 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.386 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.386 Available Spare: 0% 00:12:17.386 Available Spare Threshold: 0% 00:12:17.386 Life Percentage Used: 0% 00:12:17.386 Data Units Read: 744 00:12:17.386 Data Units Written: 673 00:12:17.386 Host Read Commands: 31272 00:12:17.386 Host Write Commands: 30695 00:12:17.386 Controller Busy Time: 0 minutes 00:12:17.386 Power Cycles: 0 00:12:17.386 Power On Hours: 0 hours 00:12:17.386 Unsafe Shutdowns: 0 00:12:17.386 Unrecoverable Media Errors: 0 00:12:17.386 Lifetime Error Log Entries: 0 00:12:17.386 Warning Temperature Time: 0 minutes 00:12:17.386 Critical Temperature Time: 0 minutes 00:12:17.386 00:12:17.386 Number of Queues 00:12:17.387 ================ 00:12:17.387 Number of I/O Submission Queues: 64 00:12:17.387 Number of I/O Completion Queues: 64 00:12:17.387 00:12:17.387 ZNS Specific Controller Data 00:12:17.387 ============================ 00:12:17.387 Zone Append Size Limit: 0 00:12:17.387 00:12:17.387 00:12:17.387 Active Namespaces 00:12:17.387 ================= 00:12:17.387 Namespace ID:1 00:12:17.387 Error Recovery Timeout: Unlimited 00:12:17.387 Command Set Identifier: NVM (00h) 00:12:17.387 Deallocate: Supported 00:12:17.387 Deallocated/Unwritten Error: Supported 00:12:17.387 Deallocated Read Value: All 0x00 00:12:17.387 Deallocate in Write Zeroes: Not Supported 00:12:17.387 Deallocated Guard Field: 0xFFFF 00:12:17.387 Flush: Supported 00:12:17.387 Reservation: Not Supported 00:12:17.387 Namespace Sharing Capabilities: Multiple Controllers 00:12:17.387 Size (in LBAs): 262144 (1GiB) 00:12:17.387 Capacity (in LBAs): 262144 (1GiB) 00:12:17.387 Utilization (in LBAs): 262144 (1GiB) 00:12:17.387 Thin Provisioning: Not Supported 00:12:17.387 Per-NS Atomic Units: No 00:12:17.387 Maximum Single Source Range Length: 128 00:12:17.387 Maximum Copy Length: 128 00:12:17.387 Maximum Source Range Count: 128 00:12:17.387 NGUID/EUI64 Never Reused: No 00:12:17.387 Namespace Write Protected: No 
00:12:17.387 Endurance group ID: 1 00:12:17.387 Number of LBA Formats: 8 00:12:17.387 Current LBA Format: LBA Format #04 00:12:17.387 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.387 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.387 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.387 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.387 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.387 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.387 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.387 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.387 00:12:17.387 Get Feature FDP: 00:12:17.387 ================ 00:12:17.387 Enabled: Yes 00:12:17.387 FDP configuration index: 0 00:12:17.387 00:12:17.387 FDP configurations log page 00:12:17.387 =========================== 00:12:17.387 Number of FDP configurations: 1 00:12:17.387 Version: 0 00:12:17.387 Size: 112 00:12:17.387 FDP Configuration Descriptor: 0 00:12:17.387 Descriptor Size: 96 00:12:17.387 Reclaim Group Identifier format: 2 00:12:17.387 FDP Volatile Write Cache: Not Present 00:12:17.387 FDP Configuration: Valid 00:12:17.387 Vendor Specific Size: 0 00:12:17.387 Number of Reclaim Groups: 2 00:12:17.387 Number of Reclaim Unit Handles: 8 00:12:17.387 Max Placement Identifiers: 128 00:12:17.387 Number of Namespaces Supported: 256 00:12:17.387 Reclaim Unit Nominal Size: 6000000 bytes 00:12:17.387 Estimated Reclaim Unit Time Limit: Not Reported 00:12:17.387 RUH Desc #000: RUH Type: Initially Isolated 00:12:17.387 RUH Desc #001: RUH Type: Initially Isolated 00:12:17.387 RUH Desc #002: RUH Type: Initially Isolated 00:12:17.387 RUH Desc #003: RUH Type: Initially Isolated 00:12:17.387 RUH Desc #004: RUH Type: Initially Isolated 00:12:17.387 RUH Desc #005: RUH Type: Initially Isolated 00:12:17.387 RUH Desc #006: RUH Type: Initially Isolated 00:12:17.387 RUH Desc #007: RUH Type: Initially Isolated 00:12:17.387 00:12:17.387 FDP reclaim unit handle usage log page 00:12:17.387 ====================================== 00:12:17.387 Number of Reclaim Unit Handles: 8 00:12:17.387 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:17.387 RUH Usage Desc #001: RUH Attributes: Unused 00:12:17.387 RUH Usage Desc #002: RUH Attributes: Unused 00:12:17.387 RUH Usage Desc #003: RUH Attributes: Unused 00:12:17.387 RUH Usage Desc #004: RUH Attributes: Unused 00:12:17.387 RUH Usage Desc #005: RUH Attributes: Unused 00:12:17.387 RUH Usage Desc #006: RUH Attributes: Unused 00:12:17.387 RUH Usage Desc #007: RUH Attributes: Unused 00:12:17.387 00:12:17.387 FDP statistics log page 00:12:17.387 ======================= 00:12:17.387 Host bytes with metadata written: 426287104 00:12:17.387 Media bytes with metadata written: 426332160 00:12:17.387 Media bytes erased: 0 00:12:17.387 00:12:17.387 FDP events log page 00:12:17.387 =================== 00:12:17.387 Number of FDP events: 0 00:12:17.387 00:12:17.387 NVM Specific Namespace Data 00:12:17.387 =========================== 00:12:17.387 Logical Block Storage Tag Mask: 0 00:12:17.387 Protection Information Capabilities: 00:12:17.387 16b Guard Protection Information Storage Tag Support: No 00:12:17.387 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.387 Storage Tag Check Read Support: No 00:12:17.387 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.387 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.387
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.387 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.387 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.387 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.387 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.387 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.387 00:12:17.387 real 0m1.959s 00:12:17.387 user 0m0.778s 00:12:17.387 sys 0m0.971s 00:12:17.387 20:40:12 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.387 20:40:12 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:12:17.387 ************************************ 00:12:17.387 END TEST nvme_identify 00:12:17.387 ************************************ 00:12:17.387 20:40:12 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:12:17.387 20:40:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:17.387 20:40:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.387 20:40:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:17.387 ************************************ 00:12:17.387 START TEST nvme_perf 00:12:17.387 ************************************ 00:12:17.387 20:40:12 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:12:17.387 20:40:12 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:12:18.770 Initializing NVMe Controllers 00:12:18.770 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:18.770 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:18.770 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:18.770 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:18.770 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:18.770 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:18.770 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:18.770 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:18.770 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:18.770 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:18.770 Initialization complete. Launching workers. 
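The perf step logged above can be reproduced by hand against the same QEMU-emulated controllers. Below is a minimal sketch, assuming the SPDK tree from this job is built under /home/vagrant/spdk_repo/spdk (the binary path is taken from the log) and that the PCIe devices have been bound to a userspace driver via scripts/setup.sh; the flag glosses reflect a reading of the spdk_nvme_perf usage text and are annotations, not part of the logged command:

    # Sketch: rerun the logged read-latency workload outside the test harness.
    #   -q 128    queue depth
    #   -w read   100% sequential read workload
    #   -o 12288  I/O size in bytes
    #   -t 1      run time in seconds
    #   -LL       software latency tracking; produces the summary and
    #             per-range histograms that follow in the log
    #   -i 0 -N   carried over verbatim from the logged command
    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w read -o 12288 -t 1 -LL -i 0 -N \
        -r 'trtype:PCIe traddr:0000:00:10.0'

The trailing -r transport ID (optional) restricts the run to a single controller; the traddr here is the first controller attached in the log.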
00:12:18.770 ======================================================== 00:12:18.770 Latency(us) 00:12:18.770 Device Information : IOPS MiB/s Average min max 00:12:18.770 PCIE (0000:00:10.0) NSID 1 from core 0: 11452.55 134.21 11210.12 8086.84 48473.39 00:12:18.770 PCIE (0000:00:11.0) NSID 1 from core 0: 11452.55 134.21 11180.46 8206.35 45158.43 00:12:18.770 PCIE (0000:00:13.0) NSID 1 from core 0: 11452.55 134.21 11149.27 8214.72 42263.26 00:12:18.770 PCIE (0000:00:12.0) NSID 1 from core 0: 11452.55 134.21 11118.88 8199.31 38829.52 00:12:18.770 PCIE (0000:00:12.0) NSID 2 from core 0: 11452.55 134.21 11087.39 8187.42 35505.60 00:12:18.770 PCIE (0000:00:12.0) NSID 3 from core 0: 11452.55 134.21 11058.14 8163.51 32250.88 00:12:18.770 ======================================================== 00:12:18.770 Total : 68715.32 805.26 11134.04 8086.84 48473.39 00:12:18.770 00:12:18.770 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:18.770 ================================================================================= 00:12:18.770 1.00000% : 8363.642us 00:12:18.770 10.00000% : 8862.964us 00:12:18.770 25.00000% : 9487.116us 00:12:18.770 50.00000% : 10610.590us 00:12:18.770 75.00000% : 12170.971us 00:12:18.770 90.00000% : 13544.107us 00:12:18.770 95.00000% : 14293.090us 00:12:18.770 98.00000% : 15229.318us 00:12:18.770 99.00000% : 37199.482us 00:12:18.770 99.50000% : 45687.954us 00:12:18.770 99.90000% : 47934.903us 00:12:18.770 99.99000% : 48434.225us 00:12:18.770 99.99900% : 48683.886us 00:12:18.770 99.99990% : 48683.886us 00:12:18.770 99.99999% : 48683.886us 00:12:18.770 00:12:18.770 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:18.770 ================================================================================= 00:12:18.770 1.00000% : 8426.057us 00:12:18.770 10.00000% : 8925.379us 00:12:18.770 25.00000% : 9487.116us 00:12:18.770 50.00000% : 10610.590us 00:12:18.770 75.00000% : 12108.556us 00:12:18.770 90.00000% : 13544.107us 00:12:18.770 95.00000% : 14230.674us 00:12:18.770 98.00000% : 15416.564us 00:12:18.770 99.00000% : 34453.211us 00:12:18.770 99.50000% : 42442.362us 00:12:18.770 99.90000% : 44689.310us 00:12:18.770 99.99000% : 45188.632us 00:12:18.770 99.99900% : 45188.632us 00:12:18.770 99.99990% : 45188.632us 00:12:18.770 99.99999% : 45188.632us 00:12:18.770 00:12:18.770 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:18.770 ================================================================================= 00:12:18.770 1.00000% : 8426.057us 00:12:18.770 10.00000% : 8925.379us 00:12:18.770 25.00000% : 9487.116us 00:12:18.770 50.00000% : 10610.590us 00:12:18.770 75.00000% : 12108.556us 00:12:18.770 90.00000% : 13606.522us 00:12:18.770 95.00000% : 14230.674us 00:12:18.770 98.00000% : 15603.810us 00:12:18.770 99.00000% : 31457.280us 00:12:18.770 99.50000% : 39696.091us 00:12:18.770 99.90000% : 41693.379us 00:12:18.770 99.99000% : 42442.362us 00:12:18.770 99.99900% : 42442.362us 00:12:18.770 99.99990% : 42442.362us 00:12:18.770 99.99999% : 42442.362us 00:12:18.770 00:12:18.770 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:18.770 ================================================================================= 00:12:18.770 1.00000% : 8426.057us 00:12:18.770 10.00000% : 8925.379us 00:12:18.770 25.00000% : 9487.116us 00:12:18.770 50.00000% : 10610.590us 00:12:18.770 75.00000% : 12108.556us 00:12:18.770 90.00000% : 13606.522us 00:12:18.770 95.00000% : 14293.090us 00:12:18.770 98.00000% : 15728.640us 
00:12:18.770 99.00000% : 28086.857us 00:12:18.770 99.50000% : 36450.499us 00:12:18.770 99.90000% : 38447.787us 00:12:18.770 99.99000% : 38947.109us 00:12:18.770 99.99900% : 38947.109us 00:12:18.770 99.99990% : 38947.109us 00:12:18.770 99.99999% : 38947.109us 00:12:18.770 00:12:18.770 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:18.770 ================================================================================= 00:12:18.770 1.00000% : 8426.057us 00:12:18.770 10.00000% : 8925.379us 00:12:18.770 25.00000% : 9487.116us 00:12:18.770 50.00000% : 10610.590us 00:12:18.770 75.00000% : 12108.556us 00:12:18.770 90.00000% : 13606.522us 00:12:18.770 95.00000% : 14355.505us 00:12:18.770 98.00000% : 15603.810us 00:12:18.770 99.00000% : 24716.434us 00:12:18.770 99.50000% : 32955.246us 00:12:18.770 99.90000% : 35202.194us 00:12:18.770 99.99000% : 35701.516us 00:12:18.770 99.99900% : 35701.516us 00:12:18.770 99.99990% : 35701.516us 00:12:18.770 99.99999% : 35701.516us 00:12:18.770 00:12:18.770 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:18.770 ================================================================================= 00:12:18.770 1.00000% : 8426.057us 00:12:18.770 10.00000% : 8925.379us 00:12:18.770 25.00000% : 9487.116us 00:12:18.770 50.00000% : 10610.590us 00:12:18.770 75.00000% : 12108.556us 00:12:18.770 90.00000% : 13668.937us 00:12:18.770 95.00000% : 14355.505us 00:12:18.770 98.00000% : 15728.640us 00:12:18.770 99.00000% : 21595.672us 00:12:18.770 99.50000% : 29709.653us 00:12:18.770 99.90000% : 31831.771us 00:12:18.770 99.99000% : 32455.924us 00:12:18.770 99.99900% : 32455.924us 00:12:18.770 99.99990% : 32455.924us 00:12:18.770 99.99999% : 32455.924us 00:12:18.770 00:12:18.770 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:18.770 ============================================================================== 00:12:18.770 Range in us Cumulative IO count 00:12:18.770 8051.566 - 8113.981: 0.0786% ( 9) 00:12:18.770 8113.981 - 8176.396: 0.2095% ( 15) 00:12:18.770 8176.396 - 8238.811: 0.4976% ( 33) 00:12:18.770 8238.811 - 8301.227: 0.9602% ( 53) 00:12:18.771 8301.227 - 8363.642: 1.5450% ( 67) 00:12:18.771 8363.642 - 8426.057: 2.2957% ( 86) 00:12:18.771 8426.057 - 8488.472: 3.1163% ( 94) 00:12:18.771 8488.472 - 8550.888: 4.0852% ( 111) 00:12:18.771 8550.888 - 8613.303: 5.0367% ( 109) 00:12:18.771 8613.303 - 8675.718: 6.1714% ( 130) 00:12:18.771 8675.718 - 8738.133: 7.3935% ( 140) 00:12:18.771 8738.133 - 8800.549: 8.7029% ( 150) 00:12:18.771 8800.549 - 8862.964: 10.1170% ( 162) 00:12:18.771 8862.964 - 8925.379: 11.5049% ( 159) 00:12:18.771 8925.379 - 8987.794: 13.1634% ( 190) 00:12:18.771 8987.794 - 9050.210: 14.7870% ( 186) 00:12:18.771 9050.210 - 9112.625: 16.5066% ( 197) 00:12:18.771 9112.625 - 9175.040: 18.2874% ( 204) 00:12:18.771 9175.040 - 9237.455: 19.9895% ( 195) 00:12:18.771 9237.455 - 9299.870: 21.8226% ( 210) 00:12:18.771 9299.870 - 9362.286: 23.3415% ( 174) 00:12:18.771 9362.286 - 9424.701: 24.8691% ( 175) 00:12:18.771 9424.701 - 9487.116: 26.3355% ( 168) 00:12:18.771 9487.116 - 9549.531: 27.6624% ( 152) 00:12:18.771 9549.531 - 9611.947: 28.9804% ( 151) 00:12:18.771 9611.947 - 9674.362: 30.4906% ( 173) 00:12:18.771 9674.362 - 9736.777: 31.9920% ( 172) 00:12:18.771 9736.777 - 9799.192: 33.4584% ( 168) 00:12:18.771 9799.192 - 9861.608: 35.0035% ( 177) 00:12:18.771 9861.608 - 9924.023: 36.4001% ( 160) 00:12:18.771 9924.023 - 9986.438: 37.8753% ( 169) 00:12:18.771 9986.438 - 10048.853: 39.2371% ( 156) 00:12:18.771 
10048.853 - 10111.269: 40.5901% ( 155) 00:12:18.771 10111.269 - 10173.684: 42.0566% ( 168) 00:12:18.771 10173.684 - 10236.099: 43.3048% ( 143) 00:12:18.771 10236.099 - 10298.514: 44.6316% ( 152) 00:12:18.771 10298.514 - 10360.930: 45.8537% ( 140) 00:12:18.771 10360.930 - 10423.345: 47.0670% ( 139) 00:12:18.771 10423.345 - 10485.760: 48.2280% ( 133) 00:12:18.771 10485.760 - 10548.175: 49.4501% ( 140) 00:12:18.771 10548.175 - 10610.590: 50.6198% ( 134) 00:12:18.771 10610.590 - 10673.006: 51.7109% ( 125) 00:12:18.771 10673.006 - 10735.421: 52.9155% ( 138) 00:12:18.771 10735.421 - 10797.836: 53.9193% ( 115) 00:12:18.771 10797.836 - 10860.251: 55.0279% ( 127) 00:12:18.771 10860.251 - 10922.667: 56.1191% ( 125) 00:12:18.771 10922.667 - 10985.082: 57.3848% ( 145) 00:12:18.771 10985.082 - 11047.497: 58.5545% ( 134) 00:12:18.771 11047.497 - 11109.912: 59.7765% ( 140) 00:12:18.771 11109.912 - 11172.328: 61.0248% ( 143) 00:12:18.771 11172.328 - 11234.743: 62.2469% ( 140) 00:12:18.771 11234.743 - 11297.158: 63.3118% ( 122) 00:12:18.771 11297.158 - 11359.573: 64.4728% ( 133) 00:12:18.771 11359.573 - 11421.989: 65.5290% ( 121) 00:12:18.771 11421.989 - 11484.404: 66.5765% ( 120) 00:12:18.771 11484.404 - 11546.819: 67.5890% ( 116) 00:12:18.771 11546.819 - 11609.234: 68.5667% ( 112) 00:12:18.771 11609.234 - 11671.650: 69.4483% ( 101) 00:12:18.771 11671.650 - 11734.065: 70.3561% ( 104) 00:12:18.771 11734.065 - 11796.480: 71.2116% ( 98) 00:12:18.771 11796.480 - 11858.895: 71.9710% ( 87) 00:12:18.771 11858.895 - 11921.310: 72.7566% ( 90) 00:12:18.771 11921.310 - 11983.726: 73.5248% ( 88) 00:12:18.771 11983.726 - 12046.141: 74.1969% ( 77) 00:12:18.771 12046.141 - 12108.556: 74.9564% ( 87) 00:12:18.771 12108.556 - 12170.971: 75.6459% ( 79) 00:12:18.771 12170.971 - 12233.387: 76.4054% ( 87) 00:12:18.771 12233.387 - 12295.802: 77.1561% ( 86) 00:12:18.771 12295.802 - 12358.217: 77.8020% ( 74) 00:12:18.771 12358.217 - 12420.632: 78.5527% ( 86) 00:12:18.771 12420.632 - 12483.048: 79.3733% ( 94) 00:12:18.771 12483.048 - 12545.463: 80.0105% ( 73) 00:12:18.771 12545.463 - 12607.878: 80.6826% ( 77) 00:12:18.771 12607.878 - 12670.293: 81.4333% ( 86) 00:12:18.771 12670.293 - 12732.709: 82.0705% ( 73) 00:12:18.771 12732.709 - 12795.124: 82.7427% ( 77) 00:12:18.771 12795.124 - 12857.539: 83.3275% ( 67) 00:12:18.771 12857.539 - 12919.954: 83.9560% ( 72) 00:12:18.771 12919.954 - 12982.370: 84.5496% ( 68) 00:12:18.771 12982.370 - 13044.785: 85.1432% ( 68) 00:12:18.771 13044.785 - 13107.200: 85.7193% ( 66) 00:12:18.771 13107.200 - 13169.615: 86.3390% ( 71) 00:12:18.771 13169.615 - 13232.030: 87.0199% ( 78) 00:12:18.771 13232.030 - 13294.446: 87.6484% ( 72) 00:12:18.771 13294.446 - 13356.861: 88.3118% ( 76) 00:12:18.771 13356.861 - 13419.276: 88.8966% ( 67) 00:12:18.771 13419.276 - 13481.691: 89.5251% ( 72) 00:12:18.771 13481.691 - 13544.107: 90.1100% ( 67) 00:12:18.771 13544.107 - 13606.522: 90.7123% ( 69) 00:12:18.771 13606.522 - 13668.937: 91.2360% ( 60) 00:12:18.771 13668.937 - 13731.352: 91.7161% ( 55) 00:12:18.771 13731.352 - 13793.768: 92.1875% ( 54) 00:12:18.771 13793.768 - 13856.183: 92.5890% ( 46) 00:12:18.771 13856.183 - 13918.598: 92.9731% ( 44) 00:12:18.771 13918.598 - 13981.013: 93.3572% ( 44) 00:12:18.771 13981.013 - 14043.429: 93.7936% ( 50) 00:12:18.771 14043.429 - 14105.844: 94.1253% ( 38) 00:12:18.771 14105.844 - 14168.259: 94.5182% ( 45) 00:12:18.771 14168.259 - 14230.674: 94.8848% ( 42) 00:12:18.771 14230.674 - 14293.090: 95.2078% ( 37) 00:12:18.771 14293.090 - 14355.505: 95.5656% ( 41) 00:12:18.771 
14355.505 - 14417.920: 95.9410% ( 43) 00:12:18.771 14417.920 - 14480.335: 96.2291% ( 33) 00:12:18.771 14480.335 - 14542.750: 96.5258% ( 34) 00:12:18.771 14542.750 - 14605.166: 96.7790% ( 29) 00:12:18.771 14605.166 - 14667.581: 97.0496% ( 31) 00:12:18.771 14667.581 - 14729.996: 97.1805% ( 15) 00:12:18.771 14729.996 - 14792.411: 97.3115% ( 15) 00:12:18.771 14792.411 - 14854.827: 97.4686% ( 18) 00:12:18.771 14854.827 - 14917.242: 97.5559% ( 10) 00:12:18.771 14917.242 - 14979.657: 97.7043% ( 17) 00:12:18.771 14979.657 - 15042.072: 97.8003% ( 11) 00:12:18.771 15042.072 - 15104.488: 97.8701% ( 8) 00:12:18.771 15104.488 - 15166.903: 97.9487% ( 9) 00:12:18.771 15166.903 - 15229.318: 98.0185% ( 8) 00:12:18.771 15229.318 - 15291.733: 98.0622% ( 5) 00:12:18.771 15291.733 - 15354.149: 98.1058% ( 5) 00:12:18.771 15354.149 - 15416.564: 98.1494% ( 5) 00:12:18.771 15416.564 - 15478.979: 98.2105% ( 7) 00:12:18.771 15478.979 - 15541.394: 98.2455% ( 4) 00:12:18.771 15541.394 - 15603.810: 98.3066% ( 7) 00:12:18.771 15603.810 - 15666.225: 98.3502% ( 5) 00:12:18.771 15666.225 - 15728.640: 98.4026% ( 6) 00:12:18.771 15728.640 - 15791.055: 98.4462% ( 5) 00:12:18.771 15791.055 - 15853.470: 98.4724% ( 3) 00:12:18.771 15853.470 - 15915.886: 98.5248% ( 6) 00:12:18.771 15915.886 - 15978.301: 98.5597% ( 4) 00:12:18.771 15978.301 - 16103.131: 98.6470% ( 10) 00:12:18.771 16103.131 - 16227.962: 98.7081% ( 7) 00:12:18.771 16227.962 - 16352.792: 98.7954% ( 10) 00:12:18.771 16352.792 - 16477.623: 98.8652% ( 8) 00:12:18.771 16477.623 - 16602.453: 98.8827% ( 2) 00:12:18.771 36200.838 - 36450.499: 98.8914% ( 1) 00:12:18.771 36450.499 - 36700.160: 98.9351% ( 5) 00:12:18.771 36700.160 - 36949.821: 98.9787% ( 5) 00:12:18.771 36949.821 - 37199.482: 99.0136% ( 4) 00:12:18.771 37199.482 - 37449.143: 99.0660% ( 6) 00:12:18.771 37449.143 - 37698.804: 99.1184% ( 6) 00:12:18.771 37698.804 - 37948.465: 99.1620% ( 5) 00:12:18.772 37948.465 - 38198.126: 99.2057% ( 5) 00:12:18.772 38198.126 - 38447.787: 99.2493% ( 5) 00:12:18.772 38447.787 - 38697.448: 99.3017% ( 6) 00:12:18.772 38697.448 - 38947.109: 99.3453% ( 5) 00:12:18.772 38947.109 - 39196.770: 99.3977% ( 6) 00:12:18.772 39196.770 - 39446.430: 99.4413% ( 5) 00:12:18.772 44938.971 - 45188.632: 99.4501% ( 1) 00:12:18.772 45188.632 - 45438.293: 99.4850% ( 4) 00:12:18.772 45438.293 - 45687.954: 99.5286% ( 5) 00:12:18.772 45687.954 - 45937.615: 99.5723% ( 5) 00:12:18.772 45937.615 - 46187.276: 99.6072% ( 4) 00:12:18.772 46187.276 - 46436.937: 99.6508% ( 5) 00:12:18.772 46436.937 - 46686.598: 99.6945% ( 5) 00:12:18.772 46686.598 - 46936.259: 99.7294% ( 4) 00:12:18.772 46936.259 - 47185.920: 99.7730% ( 5) 00:12:18.772 47185.920 - 47435.581: 99.8167% ( 5) 00:12:18.772 47435.581 - 47685.242: 99.8603% ( 5) 00:12:18.772 47685.242 - 47934.903: 99.9040% ( 5) 00:12:18.772 47934.903 - 48184.564: 99.9476% ( 5) 00:12:18.772 48184.564 - 48434.225: 99.9913% ( 5) 00:12:18.772 48434.225 - 48683.886: 100.0000% ( 1) 00:12:18.772 00:12:18.772 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:18.772 ============================================================================== 00:12:18.772 Range in us Cumulative IO count 00:12:18.772 8176.396 - 8238.811: 0.0873% ( 10) 00:12:18.772 8238.811 - 8301.227: 0.3841% ( 34) 00:12:18.772 8301.227 - 8363.642: 0.8031% ( 48) 00:12:18.772 8363.642 - 8426.057: 1.5712% ( 88) 00:12:18.772 8426.057 - 8488.472: 2.3307% ( 87) 00:12:18.772 8488.472 - 8550.888: 3.1686% ( 96) 00:12:18.772 8550.888 - 8613.303: 4.2423% ( 123) 00:12:18.772 8613.303 - 8675.718: 5.4207% 
( 135) 00:12:18.772 8675.718 - 8738.133: 6.6079% ( 136) 00:12:18.772 8738.133 - 8800.549: 8.0045% ( 160) 00:12:18.772 8800.549 - 8862.964: 9.4099% ( 161) 00:12:18.772 8862.964 - 8925.379: 10.8677% ( 167) 00:12:18.772 8925.379 - 8987.794: 12.5000% ( 187) 00:12:18.772 8987.794 - 9050.210: 14.2371% ( 199) 00:12:18.772 9050.210 - 9112.625: 16.0615% ( 209) 00:12:18.772 9112.625 - 9175.040: 17.7549% ( 194) 00:12:18.772 9175.040 - 9237.455: 19.4832% ( 198) 00:12:18.772 9237.455 - 9299.870: 21.2465% ( 202) 00:12:18.772 9299.870 - 9362.286: 22.7304% ( 170) 00:12:18.772 9362.286 - 9424.701: 24.1620% ( 164) 00:12:18.772 9424.701 - 9487.116: 25.5237% ( 156) 00:12:18.772 9487.116 - 9549.531: 27.0164% ( 171) 00:12:18.772 9549.531 - 9611.947: 28.5353% ( 174) 00:12:18.772 9611.947 - 9674.362: 30.1501% ( 185) 00:12:18.772 9674.362 - 9736.777: 31.7301% ( 181) 00:12:18.772 9736.777 - 9799.192: 33.6767% ( 223) 00:12:18.772 9799.192 - 9861.608: 35.3177% ( 188) 00:12:18.772 9861.608 - 9924.023: 36.9501% ( 187) 00:12:18.772 9924.023 - 9986.438: 38.4602% ( 173) 00:12:18.772 9986.438 - 10048.853: 40.0140% ( 178) 00:12:18.772 10048.853 - 10111.269: 41.3844% ( 157) 00:12:18.772 10111.269 - 10173.684: 42.7287% ( 154) 00:12:18.772 10173.684 - 10236.099: 44.0293% ( 149) 00:12:18.772 10236.099 - 10298.514: 45.3736% ( 154) 00:12:18.772 10298.514 - 10360.930: 46.5957% ( 140) 00:12:18.772 10360.930 - 10423.345: 47.6606% ( 122) 00:12:18.772 10423.345 - 10485.760: 48.6121% ( 109) 00:12:18.772 10485.760 - 10548.175: 49.6334% ( 117) 00:12:18.772 10548.175 - 10610.590: 50.6721% ( 119) 00:12:18.772 10610.590 - 10673.006: 51.7807% ( 127) 00:12:18.772 10673.006 - 10735.421: 52.8893% ( 127) 00:12:18.772 10735.421 - 10797.836: 54.1027% ( 139) 00:12:18.772 10797.836 - 10860.251: 55.2985% ( 137) 00:12:18.772 10860.251 - 10922.667: 56.3809% ( 124) 00:12:18.772 10922.667 - 10985.082: 57.4721% ( 125) 00:12:18.772 10985.082 - 11047.497: 58.4846% ( 116) 00:12:18.772 11047.497 - 11109.912: 59.6805% ( 137) 00:12:18.772 11109.912 - 11172.328: 60.8153% ( 130) 00:12:18.772 11172.328 - 11234.743: 61.9152% ( 126) 00:12:18.772 11234.743 - 11297.158: 63.0499% ( 130) 00:12:18.772 11297.158 - 11359.573: 64.1323% ( 124) 00:12:18.772 11359.573 - 11421.989: 65.2409% ( 127) 00:12:18.772 11421.989 - 11484.404: 66.3321% ( 125) 00:12:18.772 11484.404 - 11546.819: 67.3272% ( 114) 00:12:18.772 11546.819 - 11609.234: 68.3659% ( 119) 00:12:18.772 11609.234 - 11671.650: 69.3436% ( 112) 00:12:18.772 11671.650 - 11734.065: 70.4260% ( 124) 00:12:18.772 11734.065 - 11796.480: 71.4036% ( 112) 00:12:18.772 11796.480 - 11858.895: 72.2765% ( 100) 00:12:18.772 11858.895 - 11921.310: 73.0709% ( 91) 00:12:18.772 11921.310 - 11983.726: 73.8303% ( 87) 00:12:18.772 11983.726 - 12046.141: 74.6072% ( 89) 00:12:18.772 12046.141 - 12108.556: 75.2270% ( 71) 00:12:18.772 12108.556 - 12170.971: 75.9253% ( 80) 00:12:18.772 12170.971 - 12233.387: 76.6847% ( 87) 00:12:18.772 12233.387 - 12295.802: 77.5227% ( 96) 00:12:18.772 12295.802 - 12358.217: 78.2036% ( 78) 00:12:18.772 12358.217 - 12420.632: 78.8582% ( 75) 00:12:18.772 12420.632 - 12483.048: 79.5915% ( 84) 00:12:18.772 12483.048 - 12545.463: 80.2549% ( 76) 00:12:18.772 12545.463 - 12607.878: 80.9532% ( 80) 00:12:18.772 12607.878 - 12670.293: 81.5904% ( 73) 00:12:18.772 12670.293 - 12732.709: 82.2451% ( 75) 00:12:18.772 12732.709 - 12795.124: 82.8911% ( 74) 00:12:18.772 12795.124 - 12857.539: 83.4846% ( 68) 00:12:18.772 12857.539 - 12919.954: 84.1219% ( 73) 00:12:18.772 12919.954 - 12982.370: 84.7329% ( 70) 00:12:18.772 12982.370 
- 13044.785: 85.3439% ( 70) 00:12:18.772 13044.785 - 13107.200: 85.9550% ( 70) 00:12:18.772 13107.200 - 13169.615: 86.5136% ( 64) 00:12:18.772 13169.615 - 13232.030: 87.1858% ( 77) 00:12:18.772 13232.030 - 13294.446: 87.7793% ( 68) 00:12:18.772 13294.446 - 13356.861: 88.4253% ( 74) 00:12:18.772 13356.861 - 13419.276: 89.0538% ( 72) 00:12:18.772 13419.276 - 13481.691: 89.5688% ( 59) 00:12:18.772 13481.691 - 13544.107: 90.0576% ( 56) 00:12:18.772 13544.107 - 13606.522: 90.6075% ( 63) 00:12:18.772 13606.522 - 13668.937: 91.1400% ( 61) 00:12:18.772 13668.937 - 13731.352: 91.6638% ( 60) 00:12:18.772 13731.352 - 13793.768: 92.1439% ( 55) 00:12:18.772 13793.768 - 13856.183: 92.5978% ( 52) 00:12:18.772 13856.183 - 13918.598: 93.0080% ( 47) 00:12:18.772 13918.598 - 13981.013: 93.5056% ( 57) 00:12:18.772 13981.013 - 14043.429: 93.8897% ( 44) 00:12:18.772 14043.429 - 14105.844: 94.2825% ( 45) 00:12:18.772 14105.844 - 14168.259: 94.6491% ( 42) 00:12:18.772 14168.259 - 14230.674: 95.0332% ( 44) 00:12:18.772 14230.674 - 14293.090: 95.3998% ( 42) 00:12:18.772 14293.090 - 14355.505: 95.7664% ( 42) 00:12:18.772 14355.505 - 14417.920: 96.0981% ( 38) 00:12:18.772 14417.920 - 14480.335: 96.3687% ( 31) 00:12:18.772 14480.335 - 14542.750: 96.6131% ( 28) 00:12:18.772 14542.750 - 14605.166: 96.8488% ( 27) 00:12:18.772 14605.166 - 14667.581: 97.0583% ( 24) 00:12:18.772 14667.581 - 14729.996: 97.2329% ( 20) 00:12:18.772 14729.996 - 14792.411: 97.3726% ( 16) 00:12:18.772 14792.411 - 14854.827: 97.5122% ( 16) 00:12:18.772 14854.827 - 14917.242: 97.6082% ( 11) 00:12:18.772 14917.242 - 14979.657: 97.6781% ( 8) 00:12:18.772 14979.657 - 15042.072: 97.7304% ( 6) 00:12:18.772 15042.072 - 15104.488: 97.7566% ( 3) 00:12:18.772 15104.488 - 15166.903: 97.8177% ( 7) 00:12:18.772 15166.903 - 15229.318: 97.8788% ( 7) 00:12:18.772 15229.318 - 15291.733: 97.9138% ( 4) 00:12:18.772 15291.733 - 15354.149: 97.9749% ( 7) 00:12:18.773 15354.149 - 15416.564: 98.0010% ( 3) 00:12:18.773 15416.564 - 15478.979: 98.0447% ( 5) 00:12:18.773 15478.979 - 15541.394: 98.0796% ( 4) 00:12:18.773 15541.394 - 15603.810: 98.1320% ( 6) 00:12:18.773 15603.810 - 15666.225: 98.1756% ( 5) 00:12:18.773 15666.225 - 15728.640: 98.2193% ( 5) 00:12:18.773 15728.640 - 15791.055: 98.2716% ( 6) 00:12:18.773 15791.055 - 15853.470: 98.3153% ( 5) 00:12:18.773 15853.470 - 15915.886: 98.3589% ( 5) 00:12:18.773 15915.886 - 15978.301: 98.4026% ( 5) 00:12:18.773 15978.301 - 16103.131: 98.4811% ( 9) 00:12:18.773 16103.131 - 16227.962: 98.5597% ( 9) 00:12:18.773 16227.962 - 16352.792: 98.6383% ( 9) 00:12:18.773 16352.792 - 16477.623: 98.7168% ( 9) 00:12:18.773 16477.623 - 16602.453: 98.7430% ( 3) 00:12:18.773 16602.453 - 16727.284: 98.7779% ( 4) 00:12:18.773 16727.284 - 16852.114: 98.8128% ( 4) 00:12:18.773 16852.114 - 16976.945: 98.8478% ( 4) 00:12:18.773 16976.945 - 17101.775: 98.8740% ( 3) 00:12:18.773 17101.775 - 17226.606: 98.8827% ( 1) 00:12:18.773 33454.568 - 33704.229: 98.8914% ( 1) 00:12:18.773 33704.229 - 33953.890: 98.9438% ( 6) 00:12:18.773 33953.890 - 34203.550: 98.9962% ( 6) 00:12:18.773 34203.550 - 34453.211: 99.0398% ( 5) 00:12:18.773 34453.211 - 34702.872: 99.0922% ( 6) 00:12:18.773 34702.872 - 34952.533: 99.1358% ( 5) 00:12:18.773 34952.533 - 35202.194: 99.1882% ( 6) 00:12:18.773 35202.194 - 35451.855: 99.2318% ( 5) 00:12:18.773 35451.855 - 35701.516: 99.2842% ( 6) 00:12:18.773 35701.516 - 35951.177: 99.3191% ( 4) 00:12:18.773 35951.177 - 36200.838: 99.3715% ( 6) 00:12:18.773 36200.838 - 36450.499: 99.4152% ( 5) 00:12:18.773 36450.499 - 36700.160: 99.4413% ( 
3) 00:12:18.773 41943.040 - 42192.701: 99.4501% ( 1) 00:12:18.773 42192.701 - 42442.362: 99.5024% ( 6) 00:12:18.773 42442.362 - 42692.023: 99.5461% ( 5) 00:12:18.773 42692.023 - 42941.684: 99.5810% ( 4) 00:12:18.773 42941.684 - 43191.345: 99.6247% ( 5) 00:12:18.773 43191.345 - 43441.006: 99.6770% ( 6) 00:12:18.773 43441.006 - 43690.667: 99.7207% ( 5) 00:12:18.773 43690.667 - 43940.328: 99.7643% ( 5) 00:12:18.773 43940.328 - 44189.989: 99.8080% ( 5) 00:12:18.773 44189.989 - 44439.650: 99.8603% ( 6) 00:12:18.773 44439.650 - 44689.310: 99.9040% ( 5) 00:12:18.773 44689.310 - 44938.971: 99.9564% ( 6) 00:12:18.773 44938.971 - 45188.632: 100.0000% ( 5) 00:12:18.773 00:12:18.773 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:18.773 ============================================================================== 00:12:18.773 Range in us Cumulative IO count 00:12:18.773 8176.396 - 8238.811: 0.0436% ( 5) 00:12:18.773 8238.811 - 8301.227: 0.3142% ( 31) 00:12:18.773 8301.227 - 8363.642: 0.9689% ( 75) 00:12:18.773 8363.642 - 8426.057: 1.5014% ( 61) 00:12:18.773 8426.057 - 8488.472: 2.2608% ( 87) 00:12:18.773 8488.472 - 8550.888: 3.2734% ( 116) 00:12:18.773 8550.888 - 8613.303: 4.2947% ( 117) 00:12:18.773 8613.303 - 8675.718: 5.4207% ( 129) 00:12:18.773 8675.718 - 8738.133: 6.6603% ( 142) 00:12:18.773 8738.133 - 8800.549: 8.0133% ( 155) 00:12:18.773 8800.549 - 8862.964: 9.3837% ( 157) 00:12:18.773 8862.964 - 8925.379: 10.9200% ( 176) 00:12:18.773 8925.379 - 8987.794: 12.5000% ( 181) 00:12:18.773 8987.794 - 9050.210: 14.3069% ( 207) 00:12:18.773 9050.210 - 9112.625: 16.2273% ( 220) 00:12:18.773 9112.625 - 9175.040: 17.9906% ( 202) 00:12:18.773 9175.040 - 9237.455: 19.6491% ( 190) 00:12:18.773 9237.455 - 9299.870: 21.1854% ( 176) 00:12:18.773 9299.870 - 9362.286: 22.7916% ( 184) 00:12:18.773 9362.286 - 9424.701: 24.2057% ( 162) 00:12:18.773 9424.701 - 9487.116: 25.5674% ( 156) 00:12:18.773 9487.116 - 9549.531: 26.9029% ( 153) 00:12:18.773 9549.531 - 9611.947: 28.2210% ( 151) 00:12:18.773 9611.947 - 9674.362: 29.6788% ( 167) 00:12:18.773 9674.362 - 9736.777: 31.2151% ( 176) 00:12:18.773 9736.777 - 9799.192: 32.9609% ( 200) 00:12:18.773 9799.192 - 9861.608: 34.5583% ( 183) 00:12:18.773 9861.608 - 9924.023: 36.1383% ( 181) 00:12:18.773 9924.023 - 9986.438: 37.7270% ( 182) 00:12:18.773 9986.438 - 10048.853: 39.2371% ( 173) 00:12:18.773 10048.853 - 10111.269: 40.7210% ( 170) 00:12:18.773 10111.269 - 10173.684: 42.1177% ( 160) 00:12:18.773 10173.684 - 10236.099: 43.3485% ( 141) 00:12:18.773 10236.099 - 10298.514: 44.6404% ( 148) 00:12:18.773 10298.514 - 10360.930: 45.8537% ( 139) 00:12:18.773 10360.930 - 10423.345: 47.1107% ( 144) 00:12:18.773 10423.345 - 10485.760: 48.2629% ( 132) 00:12:18.773 10485.760 - 10548.175: 49.4064% ( 131) 00:12:18.773 10548.175 - 10610.590: 50.5150% ( 127) 00:12:18.773 10610.590 - 10673.006: 51.8156% ( 149) 00:12:18.773 10673.006 - 10735.421: 53.2297% ( 162) 00:12:18.773 10735.421 - 10797.836: 54.5828% ( 155) 00:12:18.773 10797.836 - 10860.251: 55.9707% ( 159) 00:12:18.773 10860.251 - 10922.667: 57.2626% ( 148) 00:12:18.773 10922.667 - 10985.082: 58.4323% ( 134) 00:12:18.773 10985.082 - 11047.497: 59.5321% ( 126) 00:12:18.773 11047.497 - 11109.912: 60.6320% ( 126) 00:12:18.773 11109.912 - 11172.328: 61.7318% ( 126) 00:12:18.773 11172.328 - 11234.743: 62.8142% ( 124) 00:12:18.773 11234.743 - 11297.158: 63.8705% ( 121) 00:12:18.773 11297.158 - 11359.573: 64.9529% ( 124) 00:12:18.773 11359.573 - 11421.989: 66.0440% ( 125) 00:12:18.773 11421.989 - 11484.404: 66.9605% ( 
105) 00:12:18.773 11484.404 - 11546.819: 67.8946% ( 107) 00:12:18.773 11546.819 - 11609.234: 68.8198% ( 106) 00:12:18.773 11609.234 - 11671.650: 69.7713% ( 109) 00:12:18.773 11671.650 - 11734.065: 70.7140% ( 108) 00:12:18.773 11734.065 - 11796.480: 71.5957% ( 101) 00:12:18.773 11796.480 - 11858.895: 72.3900% ( 91) 00:12:18.773 11858.895 - 11921.310: 73.1233% ( 84) 00:12:18.773 11921.310 - 11983.726: 73.6994% ( 66) 00:12:18.773 11983.726 - 12046.141: 74.3628% ( 76) 00:12:18.773 12046.141 - 12108.556: 75.0000% ( 73) 00:12:18.773 12108.556 - 12170.971: 75.7158% ( 82) 00:12:18.773 12170.971 - 12233.387: 76.4403% ( 83) 00:12:18.773 12233.387 - 12295.802: 77.2434% ( 92) 00:12:18.773 12295.802 - 12358.217: 77.9941% ( 86) 00:12:18.773 12358.217 - 12420.632: 78.6749% ( 78) 00:12:18.773 12420.632 - 12483.048: 79.4256% ( 86) 00:12:18.773 12483.048 - 12545.463: 80.0716% ( 74) 00:12:18.773 12545.463 - 12607.878: 80.6826% ( 70) 00:12:18.773 12607.878 - 12670.293: 81.3111% ( 72) 00:12:18.773 12670.293 - 12732.709: 81.9832% ( 77) 00:12:18.773 12732.709 - 12795.124: 82.6030% ( 71) 00:12:18.773 12795.124 - 12857.539: 83.2926% ( 79) 00:12:18.773 12857.539 - 12919.954: 83.9473% ( 75) 00:12:18.773 12919.954 - 12982.370: 84.5932% ( 74) 00:12:18.773 12982.370 - 13044.785: 85.2130% ( 71) 00:12:18.773 13044.785 - 13107.200: 85.8415% ( 72) 00:12:18.773 13107.200 - 13169.615: 86.4612% ( 71) 00:12:18.773 13169.615 - 13232.030: 87.0112% ( 63) 00:12:18.773 13232.030 - 13294.446: 87.6222% ( 70) 00:12:18.773 13294.446 - 13356.861: 88.2682% ( 74) 00:12:18.773 13356.861 - 13419.276: 88.8530% ( 67) 00:12:18.773 13419.276 - 13481.691: 89.4466% ( 68) 00:12:18.773 13481.691 - 13544.107: 89.9878% ( 62) 00:12:18.773 13544.107 - 13606.522: 90.5377% ( 63) 00:12:18.773 13606.522 - 13668.937: 91.1226% ( 67) 00:12:18.774 13668.937 - 13731.352: 91.6376% ( 59) 00:12:18.774 13731.352 - 13793.768: 92.1700% ( 61) 00:12:18.774 13793.768 - 13856.183: 92.7025% ( 61) 00:12:18.774 13856.183 - 13918.598: 93.2175% ( 59) 00:12:18.774 13918.598 - 13981.013: 93.6889% ( 54) 00:12:18.774 13981.013 - 14043.429: 94.0992% ( 47) 00:12:18.774 14043.429 - 14105.844: 94.4745% ( 43) 00:12:18.774 14105.844 - 14168.259: 94.8062% ( 38) 00:12:18.774 14168.259 - 14230.674: 95.1816% ( 43) 00:12:18.774 14230.674 - 14293.090: 95.5482% ( 42) 00:12:18.774 14293.090 - 14355.505: 95.9148% ( 42) 00:12:18.774 14355.505 - 14417.920: 96.2727% ( 41) 00:12:18.774 14417.920 - 14480.335: 96.6131% ( 39) 00:12:18.774 14480.335 - 14542.750: 96.8837% ( 31) 00:12:18.774 14542.750 - 14605.166: 97.0583% ( 20) 00:12:18.774 14605.166 - 14667.581: 97.2067% ( 17) 00:12:18.774 14667.581 - 14729.996: 97.3202% ( 13) 00:12:18.774 14729.996 - 14792.411: 97.4075% ( 10) 00:12:18.774 14792.411 - 14854.827: 97.4598% ( 6) 00:12:18.774 14854.827 - 14917.242: 97.5209% ( 7) 00:12:18.774 14917.242 - 14979.657: 97.5821% ( 7) 00:12:18.774 14979.657 - 15042.072: 97.6432% ( 7) 00:12:18.774 15042.072 - 15104.488: 97.6868% ( 5) 00:12:18.774 15104.488 - 15166.903: 97.7217% ( 4) 00:12:18.774 15166.903 - 15229.318: 97.7566% ( 4) 00:12:18.774 15229.318 - 15291.733: 97.7916% ( 4) 00:12:18.774 15291.733 - 15354.149: 97.8352% ( 5) 00:12:18.774 15354.149 - 15416.564: 97.8788% ( 5) 00:12:18.774 15416.564 - 15478.979: 97.9225% ( 5) 00:12:18.774 15478.979 - 15541.394: 97.9661% ( 5) 00:12:18.774 15541.394 - 15603.810: 98.0010% ( 4) 00:12:18.774 15603.810 - 15666.225: 98.0534% ( 6) 00:12:18.774 15666.225 - 15728.640: 98.0971% ( 5) 00:12:18.774 15728.640 - 15791.055: 98.1320% ( 4) 00:12:18.774 15791.055 - 15853.470: 
98.1756% ( 5) 00:12:18.774 15853.470 - 15915.886: 98.2193% ( 5) 00:12:18.774 15915.886 - 15978.301: 98.2629% ( 5) 00:12:18.774 15978.301 - 16103.131: 98.3502% ( 10) 00:12:18.774 16103.131 - 16227.962: 98.4375% ( 10) 00:12:18.774 16227.962 - 16352.792: 98.5161% ( 9) 00:12:18.774 16352.792 - 16477.623: 98.5946% ( 9) 00:12:18.774 16477.623 - 16602.453: 98.6819% ( 10) 00:12:18.774 16602.453 - 16727.284: 98.7517% ( 8) 00:12:18.774 16727.284 - 16852.114: 98.7954% ( 5) 00:12:18.774 16852.114 - 16976.945: 98.8216% ( 3) 00:12:18.774 16976.945 - 17101.775: 98.8652% ( 5) 00:12:18.774 17101.775 - 17226.606: 98.8827% ( 2) 00:12:18.774 30708.297 - 30833.128: 98.9001% ( 2) 00:12:18.774 30833.128 - 30957.958: 98.9176% ( 2) 00:12:18.774 30957.958 - 31082.789: 98.9438% ( 3) 00:12:18.774 31082.789 - 31207.619: 98.9612% ( 2) 00:12:18.774 31207.619 - 31332.450: 98.9874% ( 3) 00:12:18.774 31332.450 - 31457.280: 99.0136% ( 3) 00:12:18.774 31457.280 - 31582.110: 99.0311% ( 2) 00:12:18.774 31582.110 - 31706.941: 99.0573% ( 3) 00:12:18.774 31706.941 - 31831.771: 99.0834% ( 3) 00:12:18.774 31831.771 - 31956.602: 99.1096% ( 3) 00:12:18.774 31956.602 - 32206.263: 99.1620% ( 6) 00:12:18.774 32206.263 - 32455.924: 99.2057% ( 5) 00:12:18.774 32455.924 - 32705.585: 99.2580% ( 6) 00:12:18.774 32705.585 - 32955.246: 99.3104% ( 6) 00:12:18.774 32955.246 - 33204.907: 99.3541% ( 5) 00:12:18.774 33204.907 - 33454.568: 99.3977% ( 5) 00:12:18.774 33454.568 - 33704.229: 99.4413% ( 5) 00:12:18.774 38947.109 - 39196.770: 99.4501% ( 1) 00:12:18.774 39196.770 - 39446.430: 99.4937% ( 5) 00:12:18.774 39446.430 - 39696.091: 99.5374% ( 5) 00:12:18.774 39696.091 - 39945.752: 99.5810% ( 5) 00:12:18.774 39945.752 - 40195.413: 99.6247% ( 5) 00:12:18.774 40195.413 - 40445.074: 99.6683% ( 5) 00:12:18.774 40445.074 - 40694.735: 99.7032% ( 4) 00:12:18.774 40694.735 - 40944.396: 99.7556% ( 6) 00:12:18.774 40944.396 - 41194.057: 99.7992% ( 5) 00:12:18.774 41194.057 - 41443.718: 99.8516% ( 6) 00:12:18.774 41443.718 - 41693.379: 99.9040% ( 6) 00:12:18.774 41693.379 - 41943.040: 99.9476% ( 5) 00:12:18.774 41943.040 - 42192.701: 99.9825% ( 4) 00:12:18.774 42192.701 - 42442.362: 100.0000% ( 2) 00:12:18.774 00:12:18.774 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:18.774 ============================================================================== 00:12:18.774 Range in us Cumulative IO count 00:12:18.774 8176.396 - 8238.811: 0.0960% ( 11) 00:12:18.774 8238.811 - 8301.227: 0.3492% ( 29) 00:12:18.774 8301.227 - 8363.642: 0.8380% ( 56) 00:12:18.774 8363.642 - 8426.057: 1.4665% ( 72) 00:12:18.774 8426.057 - 8488.472: 2.3307% ( 99) 00:12:18.774 8488.472 - 8550.888: 3.2647% ( 107) 00:12:18.774 8550.888 - 8613.303: 4.3820% ( 128) 00:12:18.774 8613.303 - 8675.718: 5.5517% ( 134) 00:12:18.774 8675.718 - 8738.133: 6.6865% ( 130) 00:12:18.774 8738.133 - 8800.549: 8.0569% ( 157) 00:12:18.774 8800.549 - 8862.964: 9.4623% ( 161) 00:12:18.774 8862.964 - 8925.379: 10.9462% ( 170) 00:12:18.774 8925.379 - 8987.794: 12.7619% ( 208) 00:12:18.774 8987.794 - 9050.210: 14.5601% ( 206) 00:12:18.774 9050.210 - 9112.625: 16.4804% ( 220) 00:12:18.774 9112.625 - 9175.040: 18.3834% ( 218) 00:12:18.774 9175.040 - 9237.455: 20.0943% ( 196) 00:12:18.774 9237.455 - 9299.870: 21.6393% ( 177) 00:12:18.774 9299.870 - 9362.286: 23.0447% ( 161) 00:12:18.774 9362.286 - 9424.701: 24.3802% ( 153) 00:12:18.774 9424.701 - 9487.116: 25.6983% ( 151) 00:12:18.774 9487.116 - 9549.531: 27.0426% ( 154) 00:12:18.774 9549.531 - 9611.947: 28.4742% ( 164) 00:12:18.774 9611.947 - 
9674.362: 29.9057% ( 164) 00:12:18.774 9674.362 - 9736.777: 31.5730% ( 191) 00:12:18.774 9736.777 - 9799.192: 33.2315% ( 190) 00:12:18.774 9799.192 - 9861.608: 34.8115% ( 181) 00:12:18.774 9861.608 - 9924.023: 36.3652% ( 178) 00:12:18.774 9924.023 - 9986.438: 37.8055% ( 165) 00:12:18.774 9986.438 - 10048.853: 39.2284% ( 163) 00:12:18.774 10048.853 - 10111.269: 40.7210% ( 171) 00:12:18.774 10111.269 - 10173.684: 42.0304% ( 150) 00:12:18.774 10173.684 - 10236.099: 43.2699% ( 142) 00:12:18.774 10236.099 - 10298.514: 44.5880% ( 151) 00:12:18.774 10298.514 - 10360.930: 45.7664% ( 135) 00:12:18.774 10360.930 - 10423.345: 47.1194% ( 155) 00:12:18.774 10423.345 - 10485.760: 48.3939% ( 146) 00:12:18.774 10485.760 - 10548.175: 49.5723% ( 135) 00:12:18.774 10548.175 - 10610.590: 50.7682% ( 137) 00:12:18.774 10610.590 - 10673.006: 51.9728% ( 138) 00:12:18.774 10673.006 - 10735.421: 53.2036% ( 141) 00:12:18.774 10735.421 - 10797.836: 54.4955% ( 148) 00:12:18.774 10797.836 - 10860.251: 55.8572% ( 156) 00:12:18.774 10860.251 - 10922.667: 57.1491% ( 148) 00:12:18.774 10922.667 - 10985.082: 58.3101% ( 133) 00:12:18.774 10985.082 - 11047.497: 59.4536% ( 131) 00:12:18.774 11047.497 - 11109.912: 60.5098% ( 121) 00:12:18.774 11109.912 - 11172.328: 61.6009% ( 125) 00:12:18.774 11172.328 - 11234.743: 62.6659% ( 122) 00:12:18.774 11234.743 - 11297.158: 63.7133% ( 120) 00:12:18.774 11297.158 - 11359.573: 64.7259% ( 116) 00:12:18.774 11359.573 - 11421.989: 65.7821% ( 121) 00:12:18.775 11421.989 - 11484.404: 66.7510% ( 111) 00:12:18.775 11484.404 - 11546.819: 67.7200% ( 111) 00:12:18.775 11546.819 - 11609.234: 68.6889% ( 111) 00:12:18.775 11609.234 - 11671.650: 69.6054% ( 105) 00:12:18.775 11671.650 - 11734.065: 70.3998% ( 91) 00:12:18.775 11734.065 - 11796.480: 71.3600% ( 110) 00:12:18.775 11796.480 - 11858.895: 72.1805% ( 94) 00:12:18.775 11858.895 - 11921.310: 73.0185% ( 96) 00:12:18.775 11921.310 - 11983.726: 73.7954% ( 89) 00:12:18.775 11983.726 - 12046.141: 74.4763% ( 78) 00:12:18.775 12046.141 - 12108.556: 75.1571% ( 78) 00:12:18.775 12108.556 - 12170.971: 75.8205% ( 76) 00:12:18.775 12170.971 - 12233.387: 76.5712% ( 86) 00:12:18.775 12233.387 - 12295.802: 77.2957% ( 83) 00:12:18.775 12295.802 - 12358.217: 77.9853% ( 79) 00:12:18.775 12358.217 - 12420.632: 78.7273% ( 85) 00:12:18.775 12420.632 - 12483.048: 79.4431% ( 82) 00:12:18.775 12483.048 - 12545.463: 80.1676% ( 83) 00:12:18.775 12545.463 - 12607.878: 80.8485% ( 78) 00:12:18.775 12607.878 - 12670.293: 81.5992% ( 86) 00:12:18.775 12670.293 - 12732.709: 82.2888% ( 79) 00:12:18.775 12732.709 - 12795.124: 82.9522% ( 76) 00:12:18.775 12795.124 - 12857.539: 83.6068% ( 75) 00:12:18.775 12857.539 - 12919.954: 84.2877% ( 78) 00:12:18.775 12919.954 - 12982.370: 84.8551% ( 65) 00:12:18.775 12982.370 - 13044.785: 85.4050% ( 63) 00:12:18.775 13044.785 - 13107.200: 85.9462% ( 62) 00:12:18.775 13107.200 - 13169.615: 86.4176% ( 54) 00:12:18.775 13169.615 - 13232.030: 86.9064% ( 56) 00:12:18.775 13232.030 - 13294.446: 87.4040% ( 57) 00:12:18.775 13294.446 - 13356.861: 87.9190% ( 59) 00:12:18.775 13356.861 - 13419.276: 88.4515% ( 61) 00:12:18.775 13419.276 - 13481.691: 89.0363% ( 67) 00:12:18.775 13481.691 - 13544.107: 89.6037% ( 65) 00:12:18.775 13544.107 - 13606.522: 90.1449% ( 62) 00:12:18.775 13606.522 - 13668.937: 90.6861% ( 62) 00:12:18.775 13668.937 - 13731.352: 91.1575% ( 54) 00:12:18.775 13731.352 - 13793.768: 91.6463% ( 56) 00:12:18.775 13793.768 - 13856.183: 92.1002% ( 52) 00:12:18.775 13856.183 - 13918.598: 92.5628% ( 53) 00:12:18.775 13918.598 - 13981.013: 
92.9993% ( 50) 00:12:18.775 13981.013 - 14043.429: 93.4707% ( 54) 00:12:18.775 14043.429 - 14105.844: 93.8635% ( 45) 00:12:18.775 14105.844 - 14168.259: 94.2737% ( 47) 00:12:18.775 14168.259 - 14230.674: 94.7277% ( 52) 00:12:18.775 14230.674 - 14293.090: 95.0943% ( 42) 00:12:18.775 14293.090 - 14355.505: 95.4609% ( 42) 00:12:18.775 14355.505 - 14417.920: 95.8450% ( 44) 00:12:18.775 14417.920 - 14480.335: 96.1592% ( 36) 00:12:18.775 14480.335 - 14542.750: 96.4385% ( 32) 00:12:18.775 14542.750 - 14605.166: 96.6742% ( 27) 00:12:18.775 14605.166 - 14667.581: 96.8052% ( 15) 00:12:18.775 14667.581 - 14729.996: 96.9274% ( 14) 00:12:18.775 14729.996 - 14792.411: 97.0583% ( 15) 00:12:18.775 14792.411 - 14854.827: 97.1892% ( 15) 00:12:18.775 14854.827 - 14917.242: 97.3027% ( 13) 00:12:18.775 14917.242 - 14979.657: 97.4337% ( 15) 00:12:18.775 14979.657 - 15042.072: 97.5297% ( 11) 00:12:18.775 15042.072 - 15104.488: 97.5995% ( 8) 00:12:18.775 15104.488 - 15166.903: 97.6606% ( 7) 00:12:18.775 15166.903 - 15229.318: 97.6955% ( 4) 00:12:18.775 15229.318 - 15291.733: 97.7392% ( 5) 00:12:18.775 15291.733 - 15354.149: 97.7916% ( 6) 00:12:18.775 15354.149 - 15416.564: 97.8352% ( 5) 00:12:18.775 15416.564 - 15478.979: 97.8701% ( 4) 00:12:18.775 15478.979 - 15541.394: 97.9138% ( 5) 00:12:18.775 15541.394 - 15603.810: 97.9574% ( 5) 00:12:18.775 15603.810 - 15666.225: 97.9923% ( 4) 00:12:18.775 15666.225 - 15728.640: 98.0447% ( 6) 00:12:18.775 15728.640 - 15791.055: 98.0883% ( 5) 00:12:18.775 15791.055 - 15853.470: 98.1320% ( 5) 00:12:18.775 15853.470 - 15915.886: 98.1669% ( 4) 00:12:18.775 15915.886 - 15978.301: 98.2105% ( 5) 00:12:18.775 15978.301 - 16103.131: 98.2804% ( 8) 00:12:18.775 16103.131 - 16227.962: 98.3589% ( 9) 00:12:18.775 16227.962 - 16352.792: 98.4288% ( 8) 00:12:18.775 16352.792 - 16477.623: 98.4986% ( 8) 00:12:18.775 16477.623 - 16602.453: 98.5597% ( 7) 00:12:18.775 16602.453 - 16727.284: 98.5946% ( 4) 00:12:18.775 16727.284 - 16852.114: 98.6295% ( 4) 00:12:18.775 16852.114 - 16976.945: 98.6645% ( 4) 00:12:18.775 16976.945 - 17101.775: 98.7081% ( 5) 00:12:18.775 17101.775 - 17226.606: 98.7430% ( 4) 00:12:18.775 17226.606 - 17351.436: 98.7692% ( 3) 00:12:18.775 17351.436 - 17476.267: 98.8041% ( 4) 00:12:18.775 17476.267 - 17601.097: 98.8303% ( 3) 00:12:18.775 17601.097 - 17725.928: 98.8652% ( 4) 00:12:18.775 17725.928 - 17850.758: 98.8827% ( 2) 00:12:18.775 27337.874 - 27462.705: 98.9001% ( 2) 00:12:18.775 27462.705 - 27587.535: 98.9263% ( 3) 00:12:18.775 27587.535 - 27712.366: 98.9525% ( 3) 00:12:18.775 27712.366 - 27837.196: 98.9787% ( 3) 00:12:18.775 27837.196 - 27962.027: 98.9962% ( 2) 00:12:18.775 27962.027 - 28086.857: 99.0136% ( 2) 00:12:18.775 28086.857 - 28211.688: 99.0398% ( 3) 00:12:18.775 28211.688 - 28336.518: 99.0660% ( 3) 00:12:18.775 28336.518 - 28461.349: 99.0922% ( 3) 00:12:18.775 28461.349 - 28586.179: 99.1096% ( 2) 00:12:18.775 28586.179 - 28711.010: 99.1358% ( 3) 00:12:18.775 28711.010 - 28835.840: 99.1620% ( 3) 00:12:18.775 28835.840 - 28960.670: 99.1882% ( 3) 00:12:18.775 28960.670 - 29085.501: 99.2144% ( 3) 00:12:18.775 29085.501 - 29210.331: 99.2318% ( 2) 00:12:18.775 29210.331 - 29335.162: 99.2580% ( 3) 00:12:18.775 29335.162 - 29459.992: 99.2842% ( 3) 00:12:18.775 29459.992 - 29584.823: 99.3104% ( 3) 00:12:18.775 29584.823 - 29709.653: 99.3366% ( 3) 00:12:18.775 29709.653 - 29834.484: 99.3541% ( 2) 00:12:18.775 29834.484 - 29959.314: 99.3628% ( 1) 00:12:18.775 29959.314 - 30084.145: 99.3890% ( 3) 00:12:18.775 30084.145 - 30208.975: 99.4152% ( 3) 00:12:18.775 30208.975 
- 30333.806: 99.4326% ( 2) 00:12:18.775 30333.806 - 30458.636: 99.4413% ( 1) 00:12:18.775 35701.516 - 35951.177: 99.4501% ( 1) 00:12:18.775 35951.177 - 36200.838: 99.4937% ( 5) 00:12:18.775 36200.838 - 36450.499: 99.5374% ( 5) 00:12:18.775 36450.499 - 36700.160: 99.5810% ( 5) 00:12:18.775 36700.160 - 36949.821: 99.6334% ( 6) 00:12:18.775 36949.821 - 37199.482: 99.6858% ( 6) 00:12:18.775 37199.482 - 37449.143: 99.7381% ( 6) 00:12:18.775 37449.143 - 37698.804: 99.7818% ( 5) 00:12:18.775 37698.804 - 37948.465: 99.8254% ( 5) 00:12:18.775 37948.465 - 38198.126: 99.8778% ( 6) 00:12:18.775 38198.126 - 38447.787: 99.9302% ( 6) 00:12:18.775 38447.787 - 38697.448: 99.9738% ( 5) 00:12:18.775 38697.448 - 38947.109: 100.0000% ( 3) 00:12:18.775 00:12:18.775 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:18.775 ============================================================================== 00:12:18.775 Range in us Cumulative IO count 00:12:18.775 8176.396 - 8238.811: 0.1397% ( 16) 00:12:18.775 8238.811 - 8301.227: 0.3666% ( 26) 00:12:18.775 8301.227 - 8363.642: 0.7856% ( 48) 00:12:18.775 8363.642 - 8426.057: 1.4403% ( 75) 00:12:18.775 8426.057 - 8488.472: 2.2870% ( 97) 00:12:18.775 8488.472 - 8550.888: 3.2036% ( 105) 00:12:18.775 8550.888 - 8613.303: 4.3820% ( 135) 00:12:18.775 8613.303 - 8675.718: 5.4993% ( 128) 00:12:18.775 8675.718 - 8738.133: 6.6777% ( 135) 00:12:18.775 8738.133 - 8800.549: 8.0133% ( 153) 00:12:18.776 8800.549 - 8862.964: 9.4885% ( 169) 00:12:18.776 8862.964 - 8925.379: 11.0248% ( 176) 00:12:18.776 8925.379 - 8987.794: 12.6659% ( 188) 00:12:18.776 8987.794 - 9050.210: 14.4640% ( 206) 00:12:18.776 9050.210 - 9112.625: 16.3757% ( 219) 00:12:18.776 9112.625 - 9175.040: 18.3310% ( 224) 00:12:18.776 9175.040 - 9237.455: 20.2078% ( 215) 00:12:18.776 9237.455 - 9299.870: 21.8750% ( 191) 00:12:18.776 9299.870 - 9362.286: 23.3153% ( 165) 00:12:18.776 9362.286 - 9424.701: 24.8167% ( 172) 00:12:18.776 9424.701 - 9487.116: 26.2395% ( 163) 00:12:18.776 9487.116 - 9549.531: 27.6362% ( 160) 00:12:18.776 9549.531 - 9611.947: 29.1027% ( 168) 00:12:18.776 9611.947 - 9674.362: 30.6477% ( 177) 00:12:18.776 9674.362 - 9736.777: 32.2451% ( 183) 00:12:18.776 9736.777 - 9799.192: 33.8862% ( 188) 00:12:18.776 9799.192 - 9861.608: 35.4487% ( 179) 00:12:18.776 9861.608 - 9924.023: 36.9675% ( 174) 00:12:18.776 9924.023 - 9986.438: 38.4166% ( 166) 00:12:18.776 9986.438 - 10048.853: 39.8045% ( 159) 00:12:18.776 10048.853 - 10111.269: 41.1575% ( 155) 00:12:18.776 10111.269 - 10173.684: 42.4406% ( 147) 00:12:18.776 10173.684 - 10236.099: 43.6802% ( 142) 00:12:18.776 10236.099 - 10298.514: 44.7538% ( 123) 00:12:18.776 10298.514 - 10360.930: 45.8886% ( 130) 00:12:18.776 10360.930 - 10423.345: 47.0496% ( 133) 00:12:18.776 10423.345 - 10485.760: 48.2804% ( 141) 00:12:18.776 10485.760 - 10548.175: 49.4501% ( 134) 00:12:18.776 10548.175 - 10610.590: 50.5674% ( 128) 00:12:18.776 10610.590 - 10673.006: 51.6847% ( 128) 00:12:18.776 10673.006 - 10735.421: 52.8806% ( 137) 00:12:18.776 10735.421 - 10797.836: 54.1463% ( 145) 00:12:18.776 10797.836 - 10860.251: 55.3422% ( 137) 00:12:18.776 10860.251 - 10922.667: 56.5031% ( 133) 00:12:18.776 10922.667 - 10985.082: 57.6030% ( 126) 00:12:18.776 10985.082 - 11047.497: 58.7029% ( 126) 00:12:18.776 11047.497 - 11109.912: 59.8551% ( 132) 00:12:18.776 11109.912 - 11172.328: 61.0335% ( 135) 00:12:18.776 11172.328 - 11234.743: 62.0985% ( 122) 00:12:18.776 11234.743 - 11297.158: 63.1547% ( 121) 00:12:18.776 11297.158 - 11359.573: 64.2371% ( 124) 00:12:18.776 11359.573 - 
11421.989: 65.2846% ( 120) 00:12:18.776 11421.989 - 11484.404: 66.3059% ( 117) 00:12:18.776 11484.404 - 11546.819: 67.3359% ( 118) 00:12:18.776 11546.819 - 11609.234: 68.4096% ( 123) 00:12:18.776 11609.234 - 11671.650: 69.4221% ( 116) 00:12:18.776 11671.650 - 11734.065: 70.3300% ( 104) 00:12:18.776 11734.065 - 11796.480: 71.2814% ( 109) 00:12:18.776 11796.480 - 11858.895: 72.1631% ( 101) 00:12:18.776 11858.895 - 11921.310: 73.0185% ( 98) 00:12:18.776 11921.310 - 11983.726: 73.7343% ( 82) 00:12:18.776 11983.726 - 12046.141: 74.4239% ( 79) 00:12:18.776 12046.141 - 12108.556: 75.0349% ( 70) 00:12:18.776 12108.556 - 12170.971: 75.7332% ( 80) 00:12:18.776 12170.971 - 12233.387: 76.4839% ( 86) 00:12:18.776 12233.387 - 12295.802: 77.3132% ( 95) 00:12:18.776 12295.802 - 12358.217: 77.9330% ( 71) 00:12:18.776 12358.217 - 12420.632: 78.6575% ( 83) 00:12:18.776 12420.632 - 12483.048: 79.4082% ( 86) 00:12:18.776 12483.048 - 12545.463: 80.1851% ( 89) 00:12:18.776 12545.463 - 12607.878: 80.8572% ( 77) 00:12:18.776 12607.878 - 12670.293: 81.4857% ( 72) 00:12:18.776 12670.293 - 12732.709: 82.1142% ( 72) 00:12:18.776 12732.709 - 12795.124: 82.6903% ( 66) 00:12:18.776 12795.124 - 12857.539: 83.3624% ( 77) 00:12:18.776 12857.539 - 12919.954: 83.9298% ( 65) 00:12:18.776 12919.954 - 12982.370: 84.4972% ( 65) 00:12:18.776 12982.370 - 13044.785: 85.0471% ( 63) 00:12:18.776 13044.785 - 13107.200: 85.5971% ( 63) 00:12:18.776 13107.200 - 13169.615: 86.1034% ( 58) 00:12:18.776 13169.615 - 13232.030: 86.6271% ( 60) 00:12:18.776 13232.030 - 13294.446: 87.2032% ( 66) 00:12:18.776 13294.446 - 13356.861: 87.8055% ( 69) 00:12:18.776 13356.861 - 13419.276: 88.4253% ( 71) 00:12:18.776 13419.276 - 13481.691: 88.9403% ( 59) 00:12:18.776 13481.691 - 13544.107: 89.5513% ( 70) 00:12:18.776 13544.107 - 13606.522: 90.0227% ( 54) 00:12:18.776 13606.522 - 13668.937: 90.5028% ( 55) 00:12:18.776 13668.937 - 13731.352: 90.9567% ( 52) 00:12:18.776 13731.352 - 13793.768: 91.4106% ( 52) 00:12:18.776 13793.768 - 13856.183: 91.8558% ( 51) 00:12:18.776 13856.183 - 13918.598: 92.3359% ( 55) 00:12:18.776 13918.598 - 13981.013: 92.7549% ( 48) 00:12:18.776 13981.013 - 14043.429: 93.2350% ( 55) 00:12:18.776 14043.429 - 14105.844: 93.6278% ( 45) 00:12:18.776 14105.844 - 14168.259: 94.0555% ( 49) 00:12:18.776 14168.259 - 14230.674: 94.4920% ( 50) 00:12:18.776 14230.674 - 14293.090: 94.9197% ( 49) 00:12:18.776 14293.090 - 14355.505: 95.3212% ( 46) 00:12:18.776 14355.505 - 14417.920: 95.7402% ( 48) 00:12:18.776 14417.920 - 14480.335: 96.0894% ( 40) 00:12:18.776 14480.335 - 14542.750: 96.3687% ( 32) 00:12:18.776 14542.750 - 14605.166: 96.6131% ( 28) 00:12:18.776 14605.166 - 14667.581: 96.8314% ( 25) 00:12:18.776 14667.581 - 14729.996: 97.0321% ( 23) 00:12:18.776 14729.996 - 14792.411: 97.2154% ( 21) 00:12:18.776 14792.411 - 14854.827: 97.3638% ( 17) 00:12:18.776 14854.827 - 14917.242: 97.5035% ( 16) 00:12:18.776 14917.242 - 14979.657: 97.5995% ( 11) 00:12:18.776 14979.657 - 15042.072: 97.6693% ( 8) 00:12:18.776 15042.072 - 15104.488: 97.7130% ( 5) 00:12:18.776 15104.488 - 15166.903: 97.7479% ( 4) 00:12:18.776 15166.903 - 15229.318: 97.7828% ( 4) 00:12:18.777 15229.318 - 15291.733: 97.8265% ( 5) 00:12:18.777 15291.733 - 15354.149: 97.8527% ( 3) 00:12:18.777 15354.149 - 15416.564: 97.8963% ( 5) 00:12:18.777 15416.564 - 15478.979: 97.9312% ( 4) 00:12:18.777 15478.979 - 15541.394: 97.9661% ( 4) 00:12:18.777 15541.394 - 15603.810: 98.0010% ( 4) 00:12:18.777 15603.810 - 15666.225: 98.0360% ( 4) 00:12:18.777 15666.225 - 15728.640: 98.0796% ( 5) 00:12:18.777 
15728.640 - 15791.055: 98.1233% ( 5) 00:12:18.777 15791.055 - 15853.470: 98.1669% ( 5) 00:12:18.777 15853.470 - 15915.886: 98.2105% ( 5) 00:12:18.777 15915.886 - 15978.301: 98.2367% ( 3) 00:12:18.777 15978.301 - 16103.131: 98.2804% ( 5) 00:12:18.777 16103.131 - 16227.962: 98.3153% ( 4) 00:12:18.777 16227.962 - 16352.792: 98.3328% ( 2) 00:12:18.777 16352.792 - 16477.623: 98.3764% ( 5) 00:12:18.777 16477.623 - 16602.453: 98.4026% ( 3) 00:12:18.777 16602.453 - 16727.284: 98.4462% ( 5) 00:12:18.777 16727.284 - 16852.114: 98.4811% ( 4) 00:12:18.777 16852.114 - 16976.945: 98.5161% ( 4) 00:12:18.777 16976.945 - 17101.775: 98.5510% ( 4) 00:12:18.777 17101.775 - 17226.606: 98.5859% ( 4) 00:12:18.777 17226.606 - 17351.436: 98.6121% ( 3) 00:12:18.777 17351.436 - 17476.267: 98.6470% ( 4) 00:12:18.777 17476.267 - 17601.097: 98.6819% ( 4) 00:12:18.777 17601.097 - 17725.928: 98.7168% ( 4) 00:12:18.777 17725.928 - 17850.758: 98.7430% ( 3) 00:12:18.777 17850.758 - 17975.589: 98.7779% ( 4) 00:12:18.777 17975.589 - 18100.419: 98.8128% ( 4) 00:12:18.777 18100.419 - 18225.250: 98.8390% ( 3) 00:12:18.777 18225.250 - 18350.080: 98.8740% ( 4) 00:12:18.777 18350.080 - 18474.910: 98.8827% ( 1) 00:12:18.777 23967.451 - 24092.282: 98.9001% ( 2) 00:12:18.777 24092.282 - 24217.112: 98.9176% ( 2) 00:12:18.777 24217.112 - 24341.943: 98.9438% ( 3) 00:12:18.777 24341.943 - 24466.773: 98.9612% ( 2) 00:12:18.777 24466.773 - 24591.604: 98.9874% ( 3) 00:12:18.777 24591.604 - 24716.434: 99.0136% ( 3) 00:12:18.777 24716.434 - 24841.265: 99.0398% ( 3) 00:12:18.777 24841.265 - 24966.095: 99.0573% ( 2) 00:12:18.777 24966.095 - 25090.926: 99.0834% ( 3) 00:12:18.777 25090.926 - 25215.756: 99.1096% ( 3) 00:12:18.777 25215.756 - 25340.587: 99.1271% ( 2) 00:12:18.777 25340.587 - 25465.417: 99.1533% ( 3) 00:12:18.777 25465.417 - 25590.248: 99.1795% ( 3) 00:12:18.777 25590.248 - 25715.078: 99.2057% ( 3) 00:12:18.777 25715.078 - 25839.909: 99.2231% ( 2) 00:12:18.777 25839.909 - 25964.739: 99.2493% ( 3) 00:12:18.777 25964.739 - 26089.570: 99.2755% ( 3) 00:12:18.777 26089.570 - 26214.400: 99.2929% ( 2) 00:12:18.777 26214.400 - 26339.230: 99.3279% ( 4) 00:12:18.777 26339.230 - 26464.061: 99.3453% ( 2) 00:12:18.777 26464.061 - 26588.891: 99.3628% ( 2) 00:12:18.777 26588.891 - 26713.722: 99.3890% ( 3) 00:12:18.777 26713.722 - 26838.552: 99.4152% ( 3) 00:12:18.777 26838.552 - 26963.383: 99.4326% ( 2) 00:12:18.777 26963.383 - 27088.213: 99.4413% ( 1) 00:12:18.777 32455.924 - 32705.585: 99.4850% ( 5) 00:12:18.777 32705.585 - 32955.246: 99.5286% ( 5) 00:12:18.777 32955.246 - 33204.907: 99.5723% ( 5) 00:12:18.777 33204.907 - 33454.568: 99.6072% ( 4) 00:12:18.777 33454.568 - 33704.229: 99.6596% ( 6) 00:12:18.777 33704.229 - 33953.890: 99.7032% ( 5) 00:12:18.777 33953.890 - 34203.550: 99.7469% ( 5) 00:12:18.777 34203.550 - 34453.211: 99.7992% ( 6) 00:12:18.777 34453.211 - 34702.872: 99.8429% ( 5) 00:12:18.777 34702.872 - 34952.533: 99.8865% ( 5) 00:12:18.777 34952.533 - 35202.194: 99.9389% ( 6) 00:12:18.777 35202.194 - 35451.855: 99.9825% ( 5) 00:12:18.777 35451.855 - 35701.516: 100.0000% ( 2) 00:12:18.777 00:12:18.777 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:18.777 ============================================================================== 00:12:18.777 Range in us Cumulative IO count 00:12:18.777 8113.981 - 8176.396: 0.0175% ( 2) 00:12:18.777 8176.396 - 8238.811: 0.1309% ( 13) 00:12:18.777 8238.811 - 8301.227: 0.3753% ( 28) 00:12:18.777 8301.227 - 8363.642: 0.9078% ( 61) 00:12:18.777 8363.642 - 8426.057: 1.5538% ( 74) 
00:12:18.777 8426.057 - 8488.472: 2.3918% ( 96) 00:12:18.777 8488.472 - 8550.888: 3.3520% ( 110) 00:12:18.777 8550.888 - 8613.303: 4.5915% ( 142) 00:12:18.777 8613.303 - 8675.718: 5.7350% ( 131) 00:12:18.777 8675.718 - 8738.133: 7.0356% ( 149) 00:12:18.777 8738.133 - 8800.549: 8.3712% ( 153) 00:12:18.777 8800.549 - 8862.964: 9.8115% ( 165) 00:12:18.777 8862.964 - 8925.379: 11.2867% ( 169) 00:12:18.777 8925.379 - 8987.794: 12.8666% ( 181) 00:12:18.777 8987.794 - 9050.210: 14.6648% ( 206) 00:12:18.777 9050.210 - 9112.625: 16.4892% ( 209) 00:12:18.777 9112.625 - 9175.040: 18.4183% ( 221) 00:12:18.777 9175.040 - 9237.455: 20.1990% ( 204) 00:12:18.777 9237.455 - 9299.870: 21.9012% ( 195) 00:12:18.777 9299.870 - 9362.286: 23.4550% ( 178) 00:12:18.777 9362.286 - 9424.701: 24.9127% ( 167) 00:12:18.777 9424.701 - 9487.116: 26.2570% ( 154) 00:12:18.777 9487.116 - 9549.531: 27.6013% ( 154) 00:12:18.777 9549.531 - 9611.947: 29.0503% ( 166) 00:12:18.777 9611.947 - 9674.362: 30.5604% ( 173) 00:12:18.777 9674.362 - 9736.777: 32.2538% ( 194) 00:12:18.777 9736.777 - 9799.192: 33.7814% ( 175) 00:12:18.777 9799.192 - 9861.608: 35.2916% ( 173) 00:12:18.777 9861.608 - 9924.023: 36.8628% ( 180) 00:12:18.777 9924.023 - 9986.438: 38.3380% ( 169) 00:12:18.777 9986.438 - 10048.853: 39.7084% ( 157) 00:12:18.777 10048.853 - 10111.269: 40.9916% ( 147) 00:12:18.777 10111.269 - 10173.684: 42.2486% ( 144) 00:12:18.777 10173.684 - 10236.099: 43.4707% ( 140) 00:12:18.777 10236.099 - 10298.514: 44.7102% ( 142) 00:12:18.777 10298.514 - 10360.930: 45.9497% ( 142) 00:12:18.777 10360.930 - 10423.345: 47.2154% ( 145) 00:12:18.777 10423.345 - 10485.760: 48.3677% ( 132) 00:12:18.777 10485.760 - 10548.175: 49.4501% ( 124) 00:12:18.777 10548.175 - 10610.590: 50.5412% ( 125) 00:12:18.777 10610.590 - 10673.006: 51.7109% ( 134) 00:12:18.777 10673.006 - 10735.421: 52.8544% ( 131) 00:12:18.777 10735.421 - 10797.836: 54.0503% ( 137) 00:12:18.777 10797.836 - 10860.251: 55.2287% ( 135) 00:12:18.777 10860.251 - 10922.667: 56.3111% ( 124) 00:12:18.777 10922.667 - 10985.082: 57.4372% ( 129) 00:12:18.777 10985.082 - 11047.497: 58.5545% ( 128) 00:12:18.777 11047.497 - 11109.912: 59.7067% ( 132) 00:12:18.777 11109.912 - 11172.328: 60.8415% ( 130) 00:12:18.777 11172.328 - 11234.743: 62.0112% ( 134) 00:12:18.777 11234.743 - 11297.158: 63.1459% ( 130) 00:12:18.777 11297.158 - 11359.573: 64.2807% ( 130) 00:12:18.777 11359.573 - 11421.989: 65.4068% ( 129) 00:12:18.777 11421.989 - 11484.404: 66.5416% ( 130) 00:12:18.777 11484.404 - 11546.819: 67.6851% ( 131) 00:12:18.777 11546.819 - 11609.234: 68.7500% ( 122) 00:12:18.777 11609.234 - 11671.650: 69.7451% ( 114) 00:12:18.777 11671.650 - 11734.065: 70.6791% ( 107) 00:12:18.777 11734.065 - 11796.480: 71.5258% ( 97) 00:12:18.777 11796.480 - 11858.895: 72.3289% ( 92) 00:12:18.777 11858.895 - 11921.310: 73.0709% ( 85) 00:12:18.777 11921.310 - 11983.726: 73.7692% ( 80) 00:12:18.777 11983.726 - 12046.141: 74.4675% ( 80) 00:12:18.777 12046.141 - 12108.556: 75.1659% ( 80) 00:12:18.777 12108.556 - 12170.971: 75.8642% ( 80) 00:12:18.777 12170.971 - 12233.387: 76.6847% ( 94) 00:12:18.777 12233.387 - 12295.802: 77.4441% ( 87) 00:12:18.777 12295.802 - 12358.217: 78.2036% ( 87) 00:12:18.777 12358.217 - 12420.632: 78.8146% ( 70) 00:12:18.777 12420.632 - 12483.048: 79.5216% ( 81) 00:12:18.778 12483.048 - 12545.463: 80.1240% ( 69) 00:12:18.778 12545.463 - 12607.878: 80.7961% ( 77) 00:12:18.778 12607.878 - 12670.293: 81.4246% ( 72) 00:12:18.778 12670.293 - 12732.709: 82.0705% ( 74) 00:12:18.778 12732.709 - 12795.124: 
82.6466% ( 66) 00:12:18.778 12795.124 - 12857.539: 83.2751% ( 72) 00:12:18.778 12857.539 - 12919.954: 83.8338% ( 64) 00:12:18.778 12919.954 - 12982.370: 84.3488% ( 59) 00:12:18.778 12982.370 - 13044.785: 84.8900% ( 62) 00:12:18.778 13044.785 - 13107.200: 85.3876% ( 57) 00:12:18.778 13107.200 - 13169.615: 85.9637% ( 66) 00:12:18.778 13169.615 - 13232.030: 86.5311% ( 65) 00:12:18.778 13232.030 - 13294.446: 87.1596% ( 72) 00:12:18.778 13294.446 - 13356.861: 87.7182% ( 64) 00:12:18.778 13356.861 - 13419.276: 88.3642% ( 74) 00:12:18.778 13419.276 - 13481.691: 88.9839% ( 71) 00:12:18.778 13481.691 - 13544.107: 89.5077% ( 60) 00:12:18.778 13544.107 - 13606.522: 89.9965% ( 56) 00:12:18.778 13606.522 - 13668.937: 90.4591% ( 53) 00:12:18.778 13668.937 - 13731.352: 90.9480% ( 56) 00:12:18.778 13731.352 - 13793.768: 91.4106% ( 53) 00:12:18.778 13793.768 - 13856.183: 91.8471% ( 50) 00:12:18.778 13856.183 - 13918.598: 92.2399% ( 45) 00:12:18.778 13918.598 - 13981.013: 92.6501% ( 47) 00:12:18.778 13981.013 - 14043.429: 93.0604% ( 47) 00:12:18.778 14043.429 - 14105.844: 93.4270% ( 42) 00:12:18.778 14105.844 - 14168.259: 93.8635% ( 50) 00:12:18.778 14168.259 - 14230.674: 94.2999% ( 50) 00:12:18.778 14230.674 - 14293.090: 94.7189% ( 48) 00:12:18.778 14293.090 - 14355.505: 95.1292% ( 47) 00:12:18.778 14355.505 - 14417.920: 95.4696% ( 39) 00:12:18.778 14417.920 - 14480.335: 95.8362% ( 42) 00:12:18.778 14480.335 - 14542.750: 96.1505% ( 36) 00:12:18.778 14542.750 - 14605.166: 96.3600% ( 24) 00:12:18.778 14605.166 - 14667.581: 96.5346% ( 20) 00:12:18.778 14667.581 - 14729.996: 96.6917% ( 18) 00:12:18.778 14729.996 - 14792.411: 96.8663% ( 20) 00:12:18.778 14792.411 - 14854.827: 97.0409% ( 20) 00:12:18.778 14854.827 - 14917.242: 97.1805% ( 16) 00:12:18.778 14917.242 - 14979.657: 97.2853% ( 12) 00:12:18.778 14979.657 - 15042.072: 97.3813% ( 11) 00:12:18.778 15042.072 - 15104.488: 97.4686% ( 10) 00:12:18.778 15104.488 - 15166.903: 97.5559% ( 10) 00:12:18.778 15166.903 - 15229.318: 97.6082% ( 6) 00:12:18.778 15229.318 - 15291.733: 97.6693% ( 7) 00:12:18.778 15291.733 - 15354.149: 97.7217% ( 6) 00:12:18.778 15354.149 - 15416.564: 97.7828% ( 7) 00:12:18.778 15416.564 - 15478.979: 97.8265% ( 5) 00:12:18.778 15478.979 - 15541.394: 97.8876% ( 7) 00:12:18.778 15541.394 - 15603.810: 97.9399% ( 6) 00:12:18.778 15603.810 - 15666.225: 97.9923% ( 6) 00:12:18.778 15666.225 - 15728.640: 98.0447% ( 6) 00:12:18.778 15728.640 - 15791.055: 98.0883% ( 5) 00:12:18.778 15791.055 - 15853.470: 98.1320% ( 5) 00:12:18.778 15853.470 - 15915.886: 98.1669% ( 4) 00:12:18.778 15915.886 - 15978.301: 98.2018% ( 4) 00:12:18.778 15978.301 - 16103.131: 98.2455% ( 5) 00:12:18.778 16103.131 - 16227.962: 98.2978% ( 6) 00:12:18.778 16227.962 - 16352.792: 98.3240% ( 3) 00:12:18.778 16852.114 - 16976.945: 98.3502% ( 3) 00:12:18.778 16976.945 - 17101.775: 98.3851% ( 4) 00:12:18.778 17101.775 - 17226.606: 98.4200% ( 4) 00:12:18.778 17226.606 - 17351.436: 98.4550% ( 4) 00:12:18.778 17351.436 - 17476.267: 98.4899% ( 4) 00:12:18.778 17476.267 - 17601.097: 98.5248% ( 4) 00:12:18.778 17601.097 - 17725.928: 98.5597% ( 4) 00:12:18.778 17725.928 - 17850.758: 98.5859% ( 3) 00:12:18.778 17850.758 - 17975.589: 98.6208% ( 4) 00:12:18.778 17975.589 - 18100.419: 98.6557% ( 4) 00:12:18.778 18100.419 - 18225.250: 98.6819% ( 3) 00:12:18.778 18225.250 - 18350.080: 98.7168% ( 4) 00:12:18.778 18350.080 - 18474.910: 98.7517% ( 4) 00:12:18.778 18474.910 - 18599.741: 98.7867% ( 4) 00:12:18.778 18599.741 - 18724.571: 98.8128% ( 3) 00:12:18.778 18724.571 - 18849.402: 98.8478% ( 4) 
00:12:18.778 18849.402 - 18974.232: 98.8827% ( 4) 00:12:18.778 20971.520 - 21096.350: 98.9001% ( 2) 00:12:18.778 21096.350 - 21221.181: 98.9176% ( 2) 00:12:18.778 21221.181 - 21346.011: 98.9438% ( 3) 00:12:18.778 21346.011 - 21470.842: 98.9787% ( 4) 00:12:18.778 21470.842 - 21595.672: 99.0049% ( 3) 00:12:18.778 21595.672 - 21720.503: 99.0223% ( 2) 00:12:18.778 21720.503 - 21845.333: 99.0485% ( 3) 00:12:18.778 21845.333 - 21970.164: 99.0660% ( 2) 00:12:18.778 21970.164 - 22094.994: 99.0834% ( 2) 00:12:18.778 22094.994 - 22219.825: 99.1096% ( 3) 00:12:18.778 22219.825 - 22344.655: 99.1271% ( 2) 00:12:18.778 22344.655 - 22469.486: 99.1533% ( 3) 00:12:18.778 22469.486 - 22594.316: 99.1795% ( 3) 00:12:18.778 22594.316 - 22719.147: 99.2057% ( 3) 00:12:18.778 22719.147 - 22843.977: 99.2318% ( 3) 00:12:18.778 22843.977 - 22968.808: 99.2580% ( 3) 00:12:18.778 22968.808 - 23093.638: 99.2842% ( 3) 00:12:18.778 23093.638 - 23218.469: 99.3017% ( 2) 00:12:18.778 23218.469 - 23343.299: 99.3279% ( 3) 00:12:18.778 23343.299 - 23468.130: 99.3541% ( 3) 00:12:18.778 23468.130 - 23592.960: 99.3802% ( 3) 00:12:18.778 23592.960 - 23717.790: 99.3977% ( 2) 00:12:18.778 23717.790 - 23842.621: 99.4239% ( 3) 00:12:18.778 23842.621 - 23967.451: 99.4413% ( 2) 00:12:18.778 29335.162 - 29459.992: 99.4588% ( 2) 00:12:18.778 29459.992 - 29584.823: 99.4763% ( 2) 00:12:18.778 29584.823 - 29709.653: 99.5024% ( 3) 00:12:18.778 29709.653 - 29834.484: 99.5286% ( 3) 00:12:18.778 29834.484 - 29959.314: 99.5548% ( 3) 00:12:18.778 29959.314 - 30084.145: 99.5810% ( 3) 00:12:18.778 30084.145 - 30208.975: 99.6072% ( 3) 00:12:18.778 30208.975 - 30333.806: 99.6247% ( 2) 00:12:18.778 30333.806 - 30458.636: 99.6508% ( 3) 00:12:18.778 30458.636 - 30583.467: 99.6770% ( 3) 00:12:18.778 30583.467 - 30708.297: 99.6945% ( 2) 00:12:18.778 30708.297 - 30833.128: 99.7207% ( 3) 00:12:18.778 30833.128 - 30957.958: 99.7469% ( 3) 00:12:18.778 30957.958 - 31082.789: 99.7730% ( 3) 00:12:18.778 31082.789 - 31207.619: 99.7905% ( 2) 00:12:18.778 31207.619 - 31332.450: 99.8167% ( 3) 00:12:18.778 31332.450 - 31457.280: 99.8429% ( 3) 00:12:18.778 31457.280 - 31582.110: 99.8603% ( 2) 00:12:18.778 31582.110 - 31706.941: 99.8865% ( 3) 00:12:18.778 31706.941 - 31831.771: 99.9127% ( 3) 00:12:18.778 31831.771 - 31956.602: 99.9389% ( 3) 00:12:18.778 31956.602 - 32206.263: 99.9825% ( 5) 00:12:18.778 32206.263 - 32455.924: 100.0000% ( 2) 00:12:18.778 00:12:18.778 20:40:13 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:12:20.155 Initializing NVMe Controllers 00:12:20.155 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:20.155 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:20.155 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:20.155 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:20.155 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:20.155 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:20.155 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:20.155 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:20.155 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:20.155 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:20.155 Initialization complete. Launching workers. 
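For anyone replaying this stage by hand, the write-phase invocation just above expands as follows. This is an annotated restatement, not new output: the flag descriptions paraphrase spdk_nvme_perf's own usage text, and the binary path is the one this workspace uses.

  # Same run as logged above, annotated (flag meanings per `spdk_nvme_perf --help`):
  #   -q 128    I/O queue depth (outstanding commands per namespace)
  #   -w write  I/O pattern: sequential writes
  #   -o 12288  I/O size in bytes (12 KiB)
  #   -t 1      run time in seconds
  #   -LL       software latency tracking; one -L prints the percentile summary,
  #             the second L adds the per-bucket histograms seen in this log
  #   -i 0      shared memory group ID
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0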
00:12:20.155 ========================================================
00:12:20.155 Latency(us)
00:12:20.155 Device Information : IOPS MiB/s Average min max
00:12:20.155 PCIE (0000:00:10.0) NSID 1 from core 0: 10670.51 125.05 12038.12 8586.12 42490.00
00:12:20.155 PCIE (0000:00:11.0) NSID 1 from core 0: 10670.51 125.05 12016.67 8902.87 39784.86
00:12:20.155 PCIE (0000:00:13.0) NSID 1 from core 0: 10670.51 125.05 11995.62 8795.35 38342.07
00:12:20.155 PCIE (0000:00:12.0) NSID 1 from core 0: 10670.51 125.05 11974.81 9034.39 35272.85
00:12:20.155 PCIE (0000:00:12.0) NSID 2 from core 0: 10670.51 125.05 11953.92 8903.70 33806.12
00:12:20.155 PCIE (0000:00:12.0) NSID 3 from core 0: 10670.51 125.05 11934.08 8784.16 31502.75
00:12:20.155 ========================================================
00:12:20.155 Total : 64023.07 750.27 11985.54 8586.12 42490.00
00:12:20.155
00:12:20.155 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:12:20.155 =================================================================================
00:12:20.156 1.00000% : 9299.870us
00:12:20.156 10.00000% : 9924.023us
00:12:20.156 25.00000% : 10485.760us
00:12:20.156 50.00000% : 11421.989us
00:12:20.156 75.00000% : 12732.709us
00:12:20.156 90.00000% : 14792.411us
00:12:20.156 95.00000% : 15541.394us
00:12:20.156 98.00000% : 16352.792us
00:12:20.156 99.00000% : 30458.636us
00:12:20.156 99.50000% : 40445.074us
00:12:20.156 99.90000% : 42192.701us
00:12:20.156 99.99000% : 42442.362us
00:12:20.156 99.99900% : 42692.023us
00:12:20.156 99.99990% : 42692.023us
00:12:20.156 99.99999% : 42692.023us
00:12:20.156
00:12:20.156 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:12:20.156 =================================================================================
00:12:20.156 1.00000% : 9424.701us
00:12:20.156 10.00000% : 9986.438us
00:12:20.156 25.00000% : 10423.345us
00:12:20.156 50.00000% : 11421.989us
00:12:20.156 75.00000% : 12607.878us
00:12:20.156 90.00000% : 14729.996us
00:12:20.156 95.00000% : 15416.564us
00:12:20.156 98.00000% : 16352.792us
00:12:20.156 99.00000% : 29459.992us
00:12:20.156 99.50000% : 37948.465us
00:12:20.156 99.90000% : 39446.430us
00:12:20.156 99.99000% : 39945.752us
00:12:20.156 99.99900% : 39945.752us
00:12:20.156 99.99990% : 39945.752us
00:12:20.156 99.99999% : 39945.752us
00:12:20.156
00:12:20.156 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:12:20.156 =================================================================================
00:12:20.156 1.00000% : 9487.116us
00:12:20.156 10.00000% : 9986.438us
00:12:20.156 25.00000% : 10423.345us
00:12:20.156 50.00000% : 11421.989us
00:12:20.156 75.00000% : 12607.878us
00:12:20.156 90.00000% : 14729.996us
00:12:20.156 95.00000% : 15416.564us
00:12:20.156 98.00000% : 16352.792us
00:12:20.156 99.00000% : 28086.857us
00:12:20.156 99.50000% : 36450.499us
00:12:20.156 99.90000% : 38198.126us
00:12:20.156 99.99000% : 38447.787us
00:12:20.156 99.99900% : 38447.787us
00:12:20.156 99.99990% : 38447.787us
00:12:20.156 99.99999% : 38447.787us
00:12:20.156
00:12:20.156 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:12:20.156 =================================================================================
00:12:20.156 1.00000% : 9487.116us
00:12:20.156 10.00000% : 9986.438us
00:12:20.156 25.00000% : 10423.345us
00:12:20.156 50.00000% : 11421.989us
00:12:20.156 75.00000% : 12670.293us
00:12:20.156 90.00000% : 14667.581us
00:12:20.156 95.00000% : 15478.979us
00:12:20.156 98.00000% : 16477.623us
00:12:20.156 99.00000% : 27712.366us 00:12:20.156 99.50000% : 32955.246us 00:12:20.156 99.90000% : 33704.229us 00:12:20.156 99.99000% : 35451.855us 00:12:20.156 99.99900% : 35451.855us 00:12:20.156 99.99990% : 35451.855us 00:12:20.156 99.99999% : 35451.855us 00:12:20.156 00:12:20.156 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:20.156 ================================================================================= 00:12:20.156 1.00000% : 9424.701us 00:12:20.156 10.00000% : 9986.438us 00:12:20.156 25.00000% : 10423.345us 00:12:20.156 50.00000% : 11421.989us 00:12:20.156 75.00000% : 12670.293us 00:12:20.156 90.00000% : 14667.581us 00:12:20.156 95.00000% : 15354.149us 00:12:20.156 98.00000% : 16602.453us 00:12:20.156 99.00000% : 24716.434us 00:12:20.156 99.50000% : 30708.297us 00:12:20.156 99.90000% : 33454.568us 00:12:20.156 99.99000% : 33953.890us 00:12:20.156 99.99900% : 33953.890us 00:12:20.156 99.99990% : 33953.890us 00:12:20.156 99.99999% : 33953.890us 00:12:20.156 00:12:20.156 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:20.156 ================================================================================= 00:12:20.156 1.00000% : 9424.701us 00:12:20.156 10.00000% : 9986.438us 00:12:20.156 25.00000% : 10423.345us 00:12:20.156 50.00000% : 11359.573us 00:12:20.156 75.00000% : 12670.293us 00:12:20.156 90.00000% : 14729.996us 00:12:20.156 95.00000% : 15416.564us 00:12:20.156 98.00000% : 16602.453us 00:12:20.156 99.00000% : 22344.655us 00:12:20.156 99.50000% : 29709.653us 00:12:20.156 99.90000% : 31207.619us 00:12:20.156 99.99000% : 31582.110us 00:12:20.156 99.99900% : 31582.110us 00:12:20.156 99.99990% : 31582.110us 00:12:20.156 99.99999% : 31582.110us 00:12:20.156 00:12:20.156 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:20.156 ============================================================================== 00:12:20.156 Range in us Cumulative IO count 00:12:20.156 8550.888 - 8613.303: 0.0094% ( 1) 00:12:20.156 8675.718 - 8738.133: 0.0187% ( 1) 00:12:20.156 8738.133 - 8800.549: 0.0281% ( 1) 00:12:20.156 8925.379 - 8987.794: 0.0374% ( 1) 00:12:20.156 8987.794 - 9050.210: 0.1403% ( 11) 00:12:20.156 9050.210 - 9112.625: 0.3181% ( 19) 00:12:20.156 9112.625 - 9175.040: 0.3743% ( 6) 00:12:20.156 9175.040 - 9237.455: 0.6830% ( 33) 00:12:20.156 9237.455 - 9299.870: 1.0573% ( 40) 00:12:20.156 9299.870 - 9362.286: 1.5625% ( 54) 00:12:20.156 9362.286 - 9424.701: 2.3016% ( 79) 00:12:20.156 9424.701 - 9487.116: 3.1156% ( 87) 00:12:20.156 9487.116 - 9549.531: 3.5647% ( 48) 00:12:20.156 9549.531 - 9611.947: 4.3132% ( 80) 00:12:20.156 9611.947 - 9674.362: 5.2957% ( 105) 00:12:20.156 9674.362 - 9736.777: 6.6336% ( 143) 00:12:20.156 9736.777 - 9799.192: 7.8780% ( 133) 00:12:20.156 9799.192 - 9861.608: 9.3282% ( 155) 00:12:20.156 9861.608 - 9924.023: 10.8159% ( 159) 00:12:20.156 9924.023 - 9986.438: 12.9678% ( 230) 00:12:20.156 9986.438 - 10048.853: 14.5490% ( 169) 00:12:20.156 10048.853 - 10111.269: 16.6448% ( 224) 00:12:20.156 10111.269 - 10173.684: 18.1606% ( 162) 00:12:20.156 10173.684 - 10236.099: 19.9008% ( 186) 00:12:20.156 10236.099 - 10298.514: 21.2013% ( 139) 00:12:20.156 10298.514 - 10360.930: 22.7732% ( 168) 00:12:20.156 10360.930 - 10423.345: 24.3825% ( 172) 00:12:20.156 10423.345 - 10485.760: 25.8046% ( 152) 00:12:20.156 10485.760 - 10548.175: 27.3110% ( 161) 00:12:20.156 10548.175 - 10610.590: 28.9671% ( 177) 00:12:20.156 10610.590 - 10673.006: 30.5951% ( 174) 00:12:20.156 10673.006 - 10735.421: 32.1388% ( 165) 
00:12:20.156 10735.421 - 10797.836: 33.5610% ( 152) 00:12:20.156 10797.836 - 10860.251: 35.0767% ( 162) 00:12:20.156 10860.251 - 10922.667: 36.5924% ( 162) 00:12:20.156 10922.667 - 10985.082: 38.3234% ( 185) 00:12:20.156 10985.082 - 11047.497: 40.4379% ( 226) 00:12:20.156 11047.497 - 11109.912: 42.3653% ( 206) 00:12:20.156 11109.912 - 11172.328: 44.0962% ( 185) 00:12:20.156 11172.328 - 11234.743: 46.2668% ( 232) 00:12:20.156 11234.743 - 11297.158: 48.0539% ( 191) 00:12:20.156 11297.158 - 11359.573: 49.7754% ( 184) 00:12:20.156 11359.573 - 11421.989: 52.0303% ( 241) 00:12:20.156 11421.989 - 11484.404: 53.4618% ( 153) 00:12:20.156 11484.404 - 11546.819: 54.8185% ( 145) 00:12:20.156 11546.819 - 11609.234: 56.4465% ( 174) 00:12:20.156 11609.234 - 11671.650: 58.1025% ( 177) 00:12:20.156 11671.650 - 11734.065: 59.8241% ( 184) 00:12:20.156 11734.065 - 11796.480: 61.3585% ( 164) 00:12:20.156 11796.480 - 11858.895: 62.9959% ( 175) 00:12:20.156 11858.895 - 11921.310: 64.6145% ( 173) 00:12:20.156 11921.310 - 11983.726: 66.1583% ( 165) 00:12:20.156 11983.726 - 12046.141: 67.6460% ( 159) 00:12:20.156 12046.141 - 12108.556: 68.6939% ( 112) 00:12:20.156 12108.556 - 12170.971: 69.6669% ( 104) 00:12:20.156 12170.971 - 12233.387: 70.4903% ( 88) 00:12:20.156 12233.387 - 12295.802: 71.2575% ( 82) 00:12:20.156 12295.802 - 12358.217: 72.0153% ( 81) 00:12:20.156 12358.217 - 12420.632: 72.8200% ( 86) 00:12:20.156 12420.632 - 12483.048: 73.3626% ( 58) 00:12:20.156 12483.048 - 12545.463: 73.8960% ( 57) 00:12:20.156 12545.463 - 12607.878: 74.4760% ( 62) 00:12:20.156 12607.878 - 12670.293: 74.9626% ( 52) 00:12:20.156 12670.293 - 12732.709: 75.3930% ( 46) 00:12:20.156 12732.709 - 12795.124: 75.8046% ( 44) 00:12:20.156 12795.124 - 12857.539: 76.2818% ( 51) 00:12:20.156 12857.539 - 12919.954: 76.7964% ( 55) 00:12:20.156 12919.954 - 12982.370: 77.1426% ( 37) 00:12:20.156 12982.370 - 13044.785: 77.5075% ( 39) 00:12:20.156 13044.785 - 13107.200: 78.2653% ( 81) 00:12:20.156 13107.200 - 13169.615: 78.8922% ( 67) 00:12:20.156 13169.615 - 13232.030: 79.2852% ( 42) 00:12:20.156 13232.030 - 13294.446: 79.6407% ( 38) 00:12:20.156 13294.446 - 13356.861: 80.0430% ( 43) 00:12:20.156 13356.861 - 13419.276: 80.3799% ( 36) 00:12:20.156 13419.276 - 13481.691: 80.8196% ( 47) 00:12:20.156 13481.691 - 13544.107: 81.2968% ( 51) 00:12:20.156 13544.107 - 13606.522: 81.7272% ( 46) 00:12:20.156 13606.522 - 13668.937: 82.1576% ( 46) 00:12:20.156 13668.937 - 13731.352: 82.5599% ( 43) 00:12:20.156 13731.352 - 13793.768: 82.9248% ( 39) 00:12:20.156 13793.768 - 13856.183: 83.2616% ( 36) 00:12:20.156 13856.183 - 13918.598: 83.7668% ( 54) 00:12:20.156 13918.598 - 13981.013: 84.0195% ( 27) 00:12:20.156 13981.013 - 14043.429: 84.4031% ( 41) 00:12:20.156 14043.429 - 14105.844: 84.7773% ( 40) 00:12:20.156 14105.844 - 14168.259: 85.2264% ( 48) 00:12:20.156 14168.259 - 14230.674: 85.6662% ( 47) 00:12:20.156 14230.674 - 14293.090: 86.2463% ( 62) 00:12:20.156 14293.090 - 14355.505: 86.8357% ( 63) 00:12:20.156 14355.505 - 14417.920: 87.3784% ( 58) 00:12:20.156 14417.920 - 14480.335: 87.9397% ( 60) 00:12:20.156 14480.335 - 14542.750: 88.5105% ( 61) 00:12:20.156 14542.750 - 14605.166: 89.0438% ( 57) 00:12:20.156 14605.166 - 14667.581: 89.5116% ( 50) 00:12:20.156 14667.581 - 14729.996: 89.9794% ( 50) 00:12:20.156 14729.996 - 14792.411: 90.4098% ( 46) 00:12:20.156 14792.411 - 14854.827: 90.9805% ( 61) 00:12:20.156 14854.827 - 14917.242: 91.4296% ( 48) 00:12:20.156 14917.242 - 14979.657: 91.8132% ( 41) 00:12:20.157 14979.657 - 15042.072: 92.1688% ( 38) 00:12:20.157 
15042.072 - 15104.488: 92.6927% ( 56) 00:12:20.157 15104.488 - 15166.903: 93.0763% ( 41) 00:12:20.157 15166.903 - 15229.318: 93.5254% ( 48) 00:12:20.157 15229.318 - 15291.733: 93.8903% ( 39) 00:12:20.157 15291.733 - 15354.149: 94.2927% ( 43) 00:12:20.157 15354.149 - 15416.564: 94.6388% ( 37) 00:12:20.157 15416.564 - 15478.979: 94.9663% ( 35) 00:12:20.157 15478.979 - 15541.394: 95.3219% ( 38) 00:12:20.157 15541.394 - 15603.810: 95.6868% ( 39) 00:12:20.157 15603.810 - 15666.225: 95.9955% ( 33) 00:12:20.157 15666.225 - 15728.640: 96.3230% ( 35) 00:12:20.157 15728.640 - 15791.055: 96.5850% ( 28) 00:12:20.157 15791.055 - 15853.470: 96.8844% ( 32) 00:12:20.157 15853.470 - 15915.886: 97.0902% ( 22) 00:12:20.157 15915.886 - 15978.301: 97.2399% ( 16) 00:12:20.157 15978.301 - 16103.131: 97.5861% ( 37) 00:12:20.157 16103.131 - 16227.962: 97.8668% ( 30) 00:12:20.157 16227.962 - 16352.792: 98.1381% ( 29) 00:12:20.157 16352.792 - 16477.623: 98.3907% ( 27) 00:12:20.157 16477.623 - 16602.453: 98.5124% ( 13) 00:12:20.157 16602.453 - 16727.284: 98.6340% ( 13) 00:12:20.157 16727.284 - 16852.114: 98.7275% ( 10) 00:12:20.157 16852.114 - 16976.945: 98.8024% ( 8) 00:12:20.157 29459.992 - 29584.823: 98.8118% ( 1) 00:12:20.157 29584.823 - 29709.653: 98.8305% ( 2) 00:12:20.157 29709.653 - 29834.484: 98.8585% ( 3) 00:12:20.157 29834.484 - 29959.314: 98.8960% ( 4) 00:12:20.157 29959.314 - 30084.145: 98.9334% ( 4) 00:12:20.157 30084.145 - 30208.975: 98.9521% ( 2) 00:12:20.157 30208.975 - 30333.806: 98.9708% ( 2) 00:12:20.157 30333.806 - 30458.636: 99.0082% ( 4) 00:12:20.157 30458.636 - 30583.467: 99.0363% ( 3) 00:12:20.157 30583.467 - 30708.297: 99.0644% ( 3) 00:12:20.157 30708.297 - 30833.128: 99.0924% ( 3) 00:12:20.157 30833.128 - 30957.958: 99.1205% ( 3) 00:12:20.157 30957.958 - 31082.789: 99.1579% ( 4) 00:12:20.157 31082.789 - 31207.619: 99.1766% ( 2) 00:12:20.157 31207.619 - 31332.450: 99.2047% ( 3) 00:12:20.157 31332.450 - 31457.280: 99.2421% ( 4) 00:12:20.157 31457.280 - 31582.110: 99.2702% ( 3) 00:12:20.157 31582.110 - 31706.941: 99.2983% ( 3) 00:12:20.157 31706.941 - 31831.771: 99.3263% ( 3) 00:12:20.157 31831.771 - 31956.602: 99.3544% ( 3) 00:12:20.157 31956.602 - 32206.263: 99.4012% ( 5) 00:12:20.157 39945.752 - 40195.413: 99.4667% ( 7) 00:12:20.157 40195.413 - 40445.074: 99.5228% ( 6) 00:12:20.157 40445.074 - 40694.735: 99.5790% ( 6) 00:12:20.157 40694.735 - 40944.396: 99.6445% ( 7) 00:12:20.157 40944.396 - 41194.057: 99.7006% ( 6) 00:12:20.157 41194.057 - 41443.718: 99.7474% ( 5) 00:12:20.157 41443.718 - 41693.379: 99.8222% ( 8) 00:12:20.157 41693.379 - 41943.040: 99.8784% ( 6) 00:12:20.157 41943.040 - 42192.701: 99.9345% ( 6) 00:12:20.157 42192.701 - 42442.362: 99.9906% ( 6) 00:12:20.157 42442.362 - 42692.023: 100.0000% ( 1) 00:12:20.157 00:12:20.157 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:20.157 ============================================================================== 00:12:20.157 Range in us Cumulative IO count 00:12:20.157 8862.964 - 8925.379: 0.0094% ( 1) 00:12:20.157 8987.794 - 9050.210: 0.0281% ( 2) 00:12:20.157 9050.210 - 9112.625: 0.1123% ( 9) 00:12:20.157 9112.625 - 9175.040: 0.1871% ( 8) 00:12:20.157 9175.040 - 9237.455: 0.4210% ( 25) 00:12:20.157 9237.455 - 9299.870: 0.6643% ( 26) 00:12:20.157 9299.870 - 9362.286: 0.8982% ( 25) 00:12:20.157 9362.286 - 9424.701: 1.2631% ( 39) 00:12:20.157 9424.701 - 9487.116: 1.8151% ( 59) 00:12:20.157 9487.116 - 9549.531: 2.1707% ( 38) 00:12:20.157 9549.531 - 9611.947: 2.8162% ( 69) 00:12:20.157 9611.947 - 9674.362: 3.6022% ( 
84) 00:12:20.157 9674.362 - 9736.777: 4.7530% ( 123) 00:12:20.157 9736.777 - 9799.192: 5.7073% ( 102) 00:12:20.157 9799.192 - 9861.608: 7.0453% ( 143) 00:12:20.157 9861.608 - 9924.023: 8.9259% ( 201) 00:12:20.157 9924.023 - 9986.438: 10.7223% ( 192) 00:12:20.157 9986.438 - 10048.853: 12.5655% ( 197) 00:12:20.157 10048.853 - 10111.269: 14.8859% ( 248) 00:12:20.157 10111.269 - 10173.684: 17.4963% ( 279) 00:12:20.157 10173.684 - 10236.099: 19.6295% ( 228) 00:12:20.157 10236.099 - 10298.514: 21.5195% ( 202) 00:12:20.157 10298.514 - 10360.930: 23.4843% ( 210) 00:12:20.157 10360.930 - 10423.345: 25.1497% ( 178) 00:12:20.157 10423.345 - 10485.760: 26.9929% ( 197) 00:12:20.157 10485.760 - 10548.175: 28.3870% ( 149) 00:12:20.157 10548.175 - 10610.590: 29.6407% ( 134) 00:12:20.157 10610.590 - 10673.006: 30.9974% ( 145) 00:12:20.157 10673.006 - 10735.421: 32.4476% ( 155) 00:12:20.157 10735.421 - 10797.836: 34.0195% ( 168) 00:12:20.157 10797.836 - 10860.251: 35.5632% ( 165) 00:12:20.157 10860.251 - 10922.667: 36.9293% ( 146) 00:12:20.157 10922.667 - 10985.082: 38.4356% ( 161) 00:12:20.157 10985.082 - 11047.497: 39.6707% ( 132) 00:12:20.157 11047.497 - 11109.912: 41.3267% ( 177) 00:12:20.157 11109.912 - 11172.328: 42.8612% ( 164) 00:12:20.157 11172.328 - 11234.743: 44.5827% ( 184) 00:12:20.157 11234.743 - 11297.158: 46.4540% ( 200) 00:12:20.157 11297.158 - 11359.573: 48.6527% ( 235) 00:12:20.157 11359.573 - 11421.989: 50.7859% ( 228) 00:12:20.157 11421.989 - 11484.404: 52.7507% ( 210) 00:12:20.157 11484.404 - 11546.819: 54.8466% ( 224) 00:12:20.157 11546.819 - 11609.234: 56.7085% ( 199) 00:12:20.157 11609.234 - 11671.650: 58.4674% ( 188) 00:12:20.157 11671.650 - 11734.065: 60.2451% ( 190) 00:12:20.157 11734.065 - 11796.480: 61.9012% ( 177) 00:12:20.157 11796.480 - 11858.895: 63.7444% ( 197) 00:12:20.157 11858.895 - 11921.310: 65.3630% ( 173) 00:12:20.157 11921.310 - 11983.726: 66.7758% ( 151) 00:12:20.157 11983.726 - 12046.141: 68.2635% ( 159) 00:12:20.157 12046.141 - 12108.556: 69.6388% ( 147) 00:12:20.157 12108.556 - 12170.971: 70.8458% ( 129) 00:12:20.157 12170.971 - 12233.387: 71.8563% ( 108) 00:12:20.157 12233.387 - 12295.802: 72.6329% ( 83) 00:12:20.157 12295.802 - 12358.217: 73.2691% ( 68) 00:12:20.157 12358.217 - 12420.632: 73.8492% ( 62) 00:12:20.157 12420.632 - 12483.048: 74.3638% ( 55) 00:12:20.157 12483.048 - 12545.463: 74.8129% ( 48) 00:12:20.157 12545.463 - 12607.878: 75.1591% ( 37) 00:12:20.157 12607.878 - 12670.293: 75.4865% ( 35) 00:12:20.157 12670.293 - 12732.709: 75.8234% ( 36) 00:12:20.157 12732.709 - 12795.124: 76.1040% ( 30) 00:12:20.157 12795.124 - 12857.539: 76.3379% ( 25) 00:12:20.157 12857.539 - 12919.954: 76.5344% ( 21) 00:12:20.157 12919.954 - 12982.370: 76.9180% ( 41) 00:12:20.157 12982.370 - 13044.785: 77.3765% ( 49) 00:12:20.157 13044.785 - 13107.200: 77.7601% ( 41) 00:12:20.157 13107.200 - 13169.615: 78.2560% ( 53) 00:12:20.157 13169.615 - 13232.030: 78.6770% ( 45) 00:12:20.157 13232.030 - 13294.446: 79.1916% ( 55) 00:12:20.157 13294.446 - 13356.861: 79.5846% ( 42) 00:12:20.157 13356.861 - 13419.276: 80.0243% ( 47) 00:12:20.157 13419.276 - 13481.691: 80.4828% ( 49) 00:12:20.157 13481.691 - 13544.107: 80.8290% ( 37) 00:12:20.157 13544.107 - 13606.522: 81.2687% ( 47) 00:12:20.157 13606.522 - 13668.937: 81.7740% ( 54) 00:12:20.157 13668.937 - 13731.352: 82.4850% ( 76) 00:12:20.157 13731.352 - 13793.768: 83.1493% ( 71) 00:12:20.157 13793.768 - 13856.183: 83.6265% ( 51) 00:12:20.157 13856.183 - 13918.598: 84.0382% ( 44) 00:12:20.157 13918.598 - 13981.013: 84.5528% ( 55) 
00:12:20.157 13981.013 - 14043.429: 85.1235% ( 61) 00:12:20.157 14043.429 - 14105.844: 85.4884% ( 39) 00:12:20.157 14105.844 - 14168.259: 85.8439% ( 38) 00:12:20.157 14168.259 - 14230.674: 86.2182% ( 40) 00:12:20.157 14230.674 - 14293.090: 86.5737% ( 38) 00:12:20.157 14293.090 - 14355.505: 86.9573% ( 41) 00:12:20.157 14355.505 - 14417.920: 87.5000% ( 58) 00:12:20.157 14417.920 - 14480.335: 87.9678% ( 50) 00:12:20.157 14480.335 - 14542.750: 88.5853% ( 66) 00:12:20.157 14542.750 - 14605.166: 89.2403% ( 70) 00:12:20.157 14605.166 - 14667.581: 89.8952% ( 70) 00:12:20.157 14667.581 - 14729.996: 90.5314% ( 68) 00:12:20.157 14729.996 - 14792.411: 91.1957% ( 71) 00:12:20.157 14792.411 - 14854.827: 91.7103% ( 55) 00:12:20.157 14854.827 - 14917.242: 92.1501% ( 47) 00:12:20.157 14917.242 - 14979.657: 92.6272% ( 51) 00:12:20.157 14979.657 - 15042.072: 93.0951% ( 50) 00:12:20.157 15042.072 - 15104.488: 93.4787% ( 41) 00:12:20.157 15104.488 - 15166.903: 93.8623% ( 41) 00:12:20.157 15166.903 - 15229.318: 94.2365% ( 40) 00:12:20.157 15229.318 - 15291.733: 94.5734% ( 36) 00:12:20.157 15291.733 - 15354.149: 94.8728% ( 32) 00:12:20.157 15354.149 - 15416.564: 95.1815% ( 33) 00:12:20.157 15416.564 - 15478.979: 95.4154% ( 25) 00:12:20.157 15478.979 - 15541.394: 95.7055% ( 31) 00:12:20.157 15541.394 - 15603.810: 95.9487% ( 26) 00:12:20.157 15603.810 - 15666.225: 96.0984% ( 16) 00:12:20.157 15666.225 - 15728.640: 96.3604% ( 28) 00:12:20.157 15728.640 - 15791.055: 96.5756% ( 23) 00:12:20.157 15791.055 - 15853.470: 96.7814% ( 22) 00:12:20.157 15853.470 - 15915.886: 97.0434% ( 28) 00:12:20.157 15915.886 - 15978.301: 97.2867% ( 26) 00:12:20.157 15978.301 - 16103.131: 97.6329% ( 37) 00:12:20.157 16103.131 - 16227.962: 97.9978% ( 39) 00:12:20.157 16227.962 - 16352.792: 98.2129% ( 23) 00:12:20.157 16352.792 - 16477.623: 98.4656% ( 27) 00:12:20.157 16477.623 - 16602.453: 98.6340% ( 18) 00:12:20.157 16602.453 - 16727.284: 98.7275% ( 10) 00:12:20.157 16727.284 - 16852.114: 98.7743% ( 5) 00:12:20.157 16852.114 - 16976.945: 98.8024% ( 3) 00:12:20.157 28586.179 - 28711.010: 98.8398% ( 4) 00:12:20.157 28711.010 - 28835.840: 98.8679% ( 3) 00:12:20.157 28835.840 - 28960.670: 98.9053% ( 4) 00:12:20.157 28960.670 - 29085.501: 98.9334% ( 3) 00:12:20.157 29085.501 - 29210.331: 98.9708% ( 4) 00:12:20.158 29210.331 - 29335.162: 98.9989% ( 3) 00:12:20.158 29335.162 - 29459.992: 99.0363% ( 4) 00:12:20.158 29459.992 - 29584.823: 99.0550% ( 2) 00:12:20.158 29584.823 - 29709.653: 99.0924% ( 4) 00:12:20.158 29709.653 - 29834.484: 99.1205% ( 3) 00:12:20.158 29834.484 - 29959.314: 99.1486% ( 3) 00:12:20.158 29959.314 - 30084.145: 99.1766% ( 3) 00:12:20.158 30084.145 - 30208.975: 99.2141% ( 4) 00:12:20.158 30208.975 - 30333.806: 99.2421% ( 3) 00:12:20.158 30333.806 - 30458.636: 99.2796% ( 4) 00:12:20.158 30458.636 - 30583.467: 99.3076% ( 3) 00:12:20.158 30583.467 - 30708.297: 99.3451% ( 4) 00:12:20.158 30708.297 - 30833.128: 99.3731% ( 3) 00:12:20.158 30833.128 - 30957.958: 99.4012% ( 3) 00:12:20.158 37449.143 - 37698.804: 99.4480% ( 5) 00:12:20.158 37698.804 - 37948.465: 99.5135% ( 7) 00:12:20.158 37948.465 - 38198.126: 99.5696% ( 6) 00:12:20.158 38198.126 - 38447.787: 99.6351% ( 7) 00:12:20.158 38447.787 - 38697.448: 99.7006% ( 7) 00:12:20.158 38697.448 - 38947.109: 99.7661% ( 7) 00:12:20.158 38947.109 - 39196.770: 99.8409% ( 8) 00:12:20.158 39196.770 - 39446.430: 99.9064% ( 7) 00:12:20.158 39446.430 - 39696.091: 99.9719% ( 7) 00:12:20.158 39696.091 - 39945.752: 100.0000% ( 3) 00:12:20.158 00:12:20.158 Latency histogram for PCIE 
(0000:00:13.0) NSID 1 from core 0: 00:12:20.158 ============================================================================== 00:12:20.158 Range in us Cumulative IO count 00:12:20.158 8738.133 - 8800.549: 0.0094% ( 1) 00:12:20.158 8862.964 - 8925.379: 0.0187% ( 1) 00:12:20.158 8925.379 - 8987.794: 0.0281% ( 1) 00:12:20.158 9050.210 - 9112.625: 0.0561% ( 3) 00:12:20.158 9112.625 - 9175.040: 0.1497% ( 10) 00:12:20.158 9175.040 - 9237.455: 0.2994% ( 16) 00:12:20.158 9237.455 - 9299.870: 0.5240% ( 24) 00:12:20.158 9299.870 - 9362.286: 0.6549% ( 14) 00:12:20.158 9362.286 - 9424.701: 0.8795% ( 24) 00:12:20.158 9424.701 - 9487.116: 1.2631% ( 41) 00:12:20.158 9487.116 - 9549.531: 1.7871% ( 56) 00:12:20.158 9549.531 - 9611.947: 2.7882% ( 107) 00:12:20.158 9611.947 - 9674.362: 3.6396% ( 91) 00:12:20.158 9674.362 - 9736.777: 4.7436% ( 118) 00:12:20.158 9736.777 - 9799.192: 5.9319% ( 127) 00:12:20.158 9799.192 - 9861.608: 7.6722% ( 186) 00:12:20.158 9861.608 - 9924.023: 9.2908% ( 173) 00:12:20.158 9924.023 - 9986.438: 11.1246% ( 196) 00:12:20.158 9986.438 - 10048.853: 13.4356% ( 247) 00:12:20.158 10048.853 - 10111.269: 15.3911% ( 209) 00:12:20.158 10111.269 - 10173.684: 17.1688% ( 190) 00:12:20.158 10173.684 - 10236.099: 19.2365% ( 221) 00:12:20.158 10236.099 - 10298.514: 21.0423% ( 193) 00:12:20.158 10298.514 - 10360.930: 22.9510% ( 204) 00:12:20.158 10360.930 - 10423.345: 25.0187% ( 221) 00:12:20.158 10423.345 - 10485.760: 26.4970% ( 158) 00:12:20.158 10485.760 - 10548.175: 28.6022% ( 225) 00:12:20.158 10548.175 - 10610.590: 30.0056% ( 150) 00:12:20.158 10610.590 - 10673.006: 31.0816% ( 115) 00:12:20.158 10673.006 - 10735.421: 32.0640% ( 105) 00:12:20.158 10735.421 - 10797.836: 33.0558% ( 106) 00:12:20.158 10797.836 - 10860.251: 34.4124% ( 145) 00:12:20.158 10860.251 - 10922.667: 35.9656% ( 166) 00:12:20.158 10922.667 - 10985.082: 37.9397% ( 211) 00:12:20.158 10985.082 - 11047.497: 39.6145% ( 179) 00:12:20.158 11047.497 - 11109.912: 41.4016% ( 191) 00:12:20.158 11109.912 - 11172.328: 43.4412% ( 218) 00:12:20.158 11172.328 - 11234.743: 45.4903% ( 219) 00:12:20.158 11234.743 - 11297.158: 47.5861% ( 224) 00:12:20.158 11297.158 - 11359.573: 49.3076% ( 184) 00:12:20.158 11359.573 - 11421.989: 51.3754% ( 221) 00:12:20.158 11421.989 - 11484.404: 53.5086% ( 228) 00:12:20.158 11484.404 - 11546.819: 55.3424% ( 196) 00:12:20.158 11546.819 - 11609.234: 57.1856% ( 197) 00:12:20.158 11609.234 - 11671.650: 59.2347% ( 219) 00:12:20.158 11671.650 - 11734.065: 61.1808% ( 208) 00:12:20.158 11734.065 - 11796.480: 62.7058% ( 163) 00:12:20.158 11796.480 - 11858.895: 64.0999% ( 149) 00:12:20.158 11858.895 - 11921.310: 65.8028% ( 182) 00:12:20.158 11921.310 - 11983.726: 67.1594% ( 145) 00:12:20.158 11983.726 - 12046.141: 68.6284% ( 157) 00:12:20.158 12046.141 - 12108.556: 69.7979% ( 125) 00:12:20.158 12108.556 - 12170.971: 70.8177% ( 109) 00:12:20.158 12170.971 - 12233.387: 71.9311% ( 119) 00:12:20.158 12233.387 - 12295.802: 72.6796% ( 80) 00:12:20.158 12295.802 - 12358.217: 73.2878% ( 65) 00:12:20.158 12358.217 - 12420.632: 73.8866% ( 64) 00:12:20.158 12420.632 - 12483.048: 74.4012% ( 55) 00:12:20.158 12483.048 - 12545.463: 74.9532% ( 59) 00:12:20.158 12545.463 - 12607.878: 75.4772% ( 56) 00:12:20.158 12607.878 - 12670.293: 75.9543% ( 51) 00:12:20.158 12670.293 - 12732.709: 76.3379% ( 41) 00:12:20.158 12732.709 - 12795.124: 76.6374% ( 32) 00:12:20.158 12795.124 - 12857.539: 76.8806% ( 26) 00:12:20.158 12857.539 - 12919.954: 77.2549% ( 40) 00:12:20.158 12919.954 - 12982.370: 77.4794% ( 24) 00:12:20.158 12982.370 - 
13044.785: 77.7414% ( 28) 00:12:20.158 13044.785 - 13107.200: 77.9847% ( 26) 00:12:20.158 13107.200 - 13169.615: 78.3028% ( 34) 00:12:20.158 13169.615 - 13232.030: 78.6583% ( 38) 00:12:20.158 13232.030 - 13294.446: 79.0326% ( 40) 00:12:20.158 13294.446 - 13356.861: 79.8746% ( 90) 00:12:20.158 13356.861 - 13419.276: 80.3705% ( 53) 00:12:20.158 13419.276 - 13481.691: 80.7635% ( 42) 00:12:20.158 13481.691 - 13544.107: 81.1190% ( 38) 00:12:20.158 13544.107 - 13606.522: 81.5213% ( 43) 00:12:20.158 13606.522 - 13668.937: 82.0546% ( 57) 00:12:20.158 13668.937 - 13731.352: 82.7751% ( 77) 00:12:20.158 13731.352 - 13793.768: 83.3739% ( 64) 00:12:20.158 13793.768 - 13856.183: 83.7762% ( 43) 00:12:20.158 13856.183 - 13918.598: 84.1598% ( 41) 00:12:20.158 13918.598 - 13981.013: 84.4592% ( 32) 00:12:20.158 13981.013 - 14043.429: 84.7960% ( 36) 00:12:20.158 14043.429 - 14105.844: 85.1235% ( 35) 00:12:20.158 14105.844 - 14168.259: 85.4697% ( 37) 00:12:20.158 14168.259 - 14230.674: 85.9188% ( 48) 00:12:20.158 14230.674 - 14293.090: 86.3305% ( 44) 00:12:20.158 14293.090 - 14355.505: 86.9199% ( 63) 00:12:20.158 14355.505 - 14417.920: 87.3784% ( 49) 00:12:20.158 14417.920 - 14480.335: 87.8649% ( 52) 00:12:20.158 14480.335 - 14542.750: 88.3421% ( 51) 00:12:20.158 14542.750 - 14605.166: 88.8286% ( 52) 00:12:20.158 14605.166 - 14667.581: 89.5116% ( 73) 00:12:20.158 14667.581 - 14729.996: 90.1665% ( 70) 00:12:20.158 14729.996 - 14792.411: 90.9244% ( 81) 00:12:20.158 14792.411 - 14854.827: 91.5700% ( 69) 00:12:20.158 14854.827 - 14917.242: 92.1688% ( 64) 00:12:20.158 14917.242 - 14979.657: 92.6460% ( 51) 00:12:20.158 14979.657 - 15042.072: 93.0483% ( 43) 00:12:20.158 15042.072 - 15104.488: 93.4787% ( 46) 00:12:20.158 15104.488 - 15166.903: 93.8623% ( 41) 00:12:20.158 15166.903 - 15229.318: 94.2459% ( 41) 00:12:20.158 15229.318 - 15291.733: 94.5921% ( 37) 00:12:20.158 15291.733 - 15354.149: 94.8821% ( 31) 00:12:20.158 15354.149 - 15416.564: 95.1815% ( 32) 00:12:20.158 15416.564 - 15478.979: 95.3686% ( 20) 00:12:20.158 15478.979 - 15541.394: 95.5838% ( 23) 00:12:20.158 15541.394 - 15603.810: 95.7990% ( 23) 00:12:20.158 15603.810 - 15666.225: 95.9674% ( 18) 00:12:20.158 15666.225 - 15728.640: 96.1171% ( 16) 00:12:20.158 15728.640 - 15791.055: 96.3791% ( 28) 00:12:20.158 15791.055 - 15853.470: 96.6785% ( 32) 00:12:20.158 15853.470 - 15915.886: 96.9311% ( 27) 00:12:20.158 15915.886 - 15978.301: 97.1931% ( 28) 00:12:20.158 15978.301 - 16103.131: 97.6609% ( 50) 00:12:20.158 16103.131 - 16227.962: 97.9978% ( 36) 00:12:20.158 16227.962 - 16352.792: 98.2504% ( 27) 00:12:20.158 16352.792 - 16477.623: 98.4001% ( 16) 00:12:20.158 16477.623 - 16602.453: 98.5498% ( 16) 00:12:20.158 16602.453 - 16727.284: 98.6901% ( 15) 00:12:20.158 16727.284 - 16852.114: 98.7650% ( 8) 00:12:20.158 16852.114 - 16976.945: 98.8024% ( 4) 00:12:20.158 27587.535 - 27712.366: 98.8866% ( 9) 00:12:20.158 27712.366 - 27837.196: 98.9427% ( 6) 00:12:20.158 27837.196 - 27962.027: 98.9895% ( 5) 00:12:20.158 27962.027 - 28086.857: 99.0269% ( 4) 00:12:20.158 28086.857 - 28211.688: 99.0550% ( 3) 00:12:20.158 28211.688 - 28336.518: 99.0831% ( 3) 00:12:20.158 28336.518 - 28461.349: 99.1205% ( 4) 00:12:20.158 28461.349 - 28586.179: 99.1579% ( 4) 00:12:20.158 28586.179 - 28711.010: 99.1766% ( 2) 00:12:20.158 28711.010 - 28835.840: 99.1954% ( 2) 00:12:20.158 28835.840 - 28960.670: 99.2234% ( 3) 00:12:20.158 28960.670 - 29085.501: 99.2609% ( 4) 00:12:20.158 29085.501 - 29210.331: 99.2889% ( 3) 00:12:20.158 29210.331 - 29335.162: 99.3170% ( 3) 00:12:20.158 29335.162 - 
29459.992: 99.3451% ( 3) 00:12:20.158 29459.992 - 29584.823: 99.3731% ( 3) 00:12:20.158 29584.823 - 29709.653: 99.4012% ( 3) 00:12:20.158 35951.177 - 36200.838: 99.4480% ( 5) 00:12:20.158 36200.838 - 36450.499: 99.5041% ( 6) 00:12:20.158 36450.499 - 36700.160: 99.5696% ( 7) 00:12:20.158 36700.160 - 36949.821: 99.6351% ( 7) 00:12:20.158 36949.821 - 37199.482: 99.6912% ( 6) 00:12:20.158 37199.482 - 37449.143: 99.7567% ( 7) 00:12:20.158 37449.143 - 37698.804: 99.8222% ( 7) 00:12:20.158 37698.804 - 37948.465: 99.8971% ( 8) 00:12:20.158 37948.465 - 38198.126: 99.9532% ( 6) 00:12:20.158 38198.126 - 38447.787: 100.0000% ( 5) 00:12:20.158 00:12:20.158 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:20.158 ============================================================================== 00:12:20.158 Range in us Cumulative IO count 00:12:20.158 8987.794 - 9050.210: 0.0094% ( 1) 00:12:20.158 9050.210 - 9112.625: 0.0187% ( 1) 00:12:20.158 9175.040 - 9237.455: 0.0561% ( 4) 00:12:20.158 9237.455 - 9299.870: 0.1403% ( 9) 00:12:20.158 9299.870 - 9362.286: 0.2994% ( 17) 00:12:20.158 9362.286 - 9424.701: 0.6643% ( 39) 00:12:20.158 9424.701 - 9487.116: 1.2631% ( 64) 00:12:20.158 9487.116 - 9549.531: 1.7122% ( 48) 00:12:20.159 9549.531 - 9611.947: 2.5449% ( 89) 00:12:20.159 9611.947 - 9674.362: 3.4525% ( 97) 00:12:20.159 9674.362 - 9736.777: 4.3132% ( 92) 00:12:20.159 9736.777 - 9799.192: 5.4360% ( 120) 00:12:20.159 9799.192 - 9861.608: 6.6430% ( 129) 00:12:20.159 9861.608 - 9924.023: 8.5704% ( 206) 00:12:20.159 9924.023 - 9986.438: 10.6007% ( 217) 00:12:20.159 9986.438 - 10048.853: 12.6029% ( 214) 00:12:20.159 10048.853 - 10111.269: 15.1291% ( 270) 00:12:20.159 10111.269 - 10173.684: 17.4308% ( 246) 00:12:20.159 10173.684 - 10236.099: 19.8728% ( 261) 00:12:20.159 10236.099 - 10298.514: 22.2399% ( 253) 00:12:20.159 10298.514 - 10360.930: 23.8772% ( 175) 00:12:20.159 10360.930 - 10423.345: 25.5707% ( 181) 00:12:20.159 10423.345 - 10485.760: 26.9929% ( 152) 00:12:20.159 10485.760 - 10548.175: 28.5367% ( 165) 00:12:20.159 10548.175 - 10610.590: 30.3050% ( 189) 00:12:20.159 10610.590 - 10673.006: 31.5588% ( 134) 00:12:20.159 10673.006 - 10735.421: 32.7189% ( 124) 00:12:20.159 10735.421 - 10797.836: 34.1224% ( 150) 00:12:20.159 10797.836 - 10860.251: 35.2264% ( 118) 00:12:20.159 10860.251 - 10922.667: 36.8451% ( 173) 00:12:20.159 10922.667 - 10985.082: 38.3795% ( 164) 00:12:20.159 10985.082 - 11047.497: 39.6145% ( 132) 00:12:20.159 11047.497 - 11109.912: 41.2051% ( 170) 00:12:20.159 11109.912 - 11172.328: 42.9547% ( 187) 00:12:20.159 11172.328 - 11234.743: 44.9008% ( 208) 00:12:20.159 11234.743 - 11297.158: 47.1089% ( 236) 00:12:20.159 11297.158 - 11359.573: 49.3825% ( 243) 00:12:20.159 11359.573 - 11421.989: 51.8245% ( 261) 00:12:20.159 11421.989 - 11484.404: 53.8454% ( 216) 00:12:20.159 11484.404 - 11546.819: 56.1658% ( 248) 00:12:20.159 11546.819 - 11609.234: 58.2242% ( 220) 00:12:20.159 11609.234 - 11671.650: 60.2264% ( 214) 00:12:20.159 11671.650 - 11734.065: 61.8170% ( 170) 00:12:20.159 11734.065 - 11796.480: 63.3795% ( 167) 00:12:20.159 11796.480 - 11858.895: 65.1104% ( 185) 00:12:20.159 11858.895 - 11921.310: 66.5700% ( 156) 00:12:20.159 11921.310 - 11983.726: 67.8424% ( 136) 00:12:20.159 11983.726 - 12046.141: 68.7032% ( 92) 00:12:20.159 12046.141 - 12108.556: 69.4611% ( 81) 00:12:20.159 12108.556 - 12170.971: 70.2283% ( 82) 00:12:20.159 12170.971 - 12233.387: 71.2388% ( 108) 00:12:20.159 12233.387 - 12295.802: 72.0715% ( 89) 00:12:20.159 12295.802 - 12358.217: 72.7919% ( 77) 00:12:20.159 
12358.217 - 12420.632: 73.3814% ( 63) 00:12:20.159 12420.632 - 12483.048: 73.8024% ( 45) 00:12:20.159 12483.048 - 12545.463: 74.2047% ( 43) 00:12:20.159 12545.463 - 12607.878: 74.7380% ( 57) 00:12:20.159 12607.878 - 12670.293: 75.1497% ( 44) 00:12:20.159 12670.293 - 12732.709: 75.3368% ( 20) 00:12:20.159 12732.709 - 12795.124: 75.5333% ( 21) 00:12:20.159 12795.124 - 12857.539: 75.7391% ( 22) 00:12:20.159 12857.539 - 12919.954: 76.0853% ( 37) 00:12:20.159 12919.954 - 12982.370: 76.5064% ( 45) 00:12:20.159 12982.370 - 13044.785: 76.8993% ( 42) 00:12:20.159 13044.785 - 13107.200: 77.3952% ( 53) 00:12:20.159 13107.200 - 13169.615: 77.7788% ( 41) 00:12:20.159 13169.615 - 13232.030: 78.1905% ( 44) 00:12:20.159 13232.030 - 13294.446: 78.7706% ( 62) 00:12:20.159 13294.446 - 13356.861: 79.3320% ( 60) 00:12:20.159 13356.861 - 13419.276: 79.9682% ( 68) 00:12:20.159 13419.276 - 13481.691: 80.4828% ( 55) 00:12:20.159 13481.691 - 13544.107: 81.1284% ( 69) 00:12:20.159 13544.107 - 13606.522: 81.7646% ( 68) 00:12:20.159 13606.522 - 13668.937: 82.4476% ( 73) 00:12:20.159 13668.937 - 13731.352: 82.8967% ( 48) 00:12:20.159 13731.352 - 13793.768: 83.4113% ( 55) 00:12:20.159 13793.768 - 13856.183: 83.9540% ( 58) 00:12:20.159 13856.183 - 13918.598: 84.3937% ( 47) 00:12:20.159 13918.598 - 13981.013: 84.7680% ( 40) 00:12:20.159 13981.013 - 14043.429: 85.1329% ( 39) 00:12:20.159 14043.429 - 14105.844: 85.4042% ( 29) 00:12:20.159 14105.844 - 14168.259: 85.7504% ( 37) 00:12:20.159 14168.259 - 14230.674: 86.1246% ( 40) 00:12:20.159 14230.674 - 14293.090: 86.5550% ( 46) 00:12:20.159 14293.090 - 14355.505: 87.0696% ( 55) 00:12:20.159 14355.505 - 14417.920: 87.6591% ( 63) 00:12:20.159 14417.920 - 14480.335: 88.2859% ( 67) 00:12:20.159 14480.335 - 14542.750: 88.9689% ( 73) 00:12:20.159 14542.750 - 14605.166: 89.6145% ( 69) 00:12:20.159 14605.166 - 14667.581: 90.1291% ( 55) 00:12:20.159 14667.581 - 14729.996: 90.7279% ( 64) 00:12:20.159 14729.996 - 14792.411: 91.3548% ( 67) 00:12:20.159 14792.411 - 14854.827: 91.7945% ( 47) 00:12:20.159 14854.827 - 14917.242: 92.2249% ( 46) 00:12:20.159 14917.242 - 14979.657: 92.6272% ( 43) 00:12:20.159 14979.657 - 15042.072: 93.0389% ( 44) 00:12:20.159 15042.072 - 15104.488: 93.4412% ( 43) 00:12:20.159 15104.488 - 15166.903: 93.7687% ( 35) 00:12:20.159 15166.903 - 15229.318: 94.1523% ( 41) 00:12:20.159 15229.318 - 15291.733: 94.4330% ( 30) 00:12:20.159 15291.733 - 15354.149: 94.6950% ( 28) 00:12:20.159 15354.149 - 15416.564: 94.9102% ( 23) 00:12:20.159 15416.564 - 15478.979: 95.1534% ( 26) 00:12:20.159 15478.979 - 15541.394: 95.3499% ( 21) 00:12:20.159 15541.394 - 15603.810: 95.4528% ( 11) 00:12:20.159 15603.810 - 15666.225: 95.5183% ( 7) 00:12:20.159 15666.225 - 15728.640: 95.7148% ( 21) 00:12:20.159 15728.640 - 15791.055: 95.9581% ( 26) 00:12:20.159 15791.055 - 15853.470: 96.3230% ( 39) 00:12:20.159 15853.470 - 15915.886: 96.6504% ( 35) 00:12:20.159 15915.886 - 15978.301: 96.9779% ( 35) 00:12:20.159 15978.301 - 16103.131: 97.3896% ( 44) 00:12:20.159 16103.131 - 16227.962: 97.6422% ( 27) 00:12:20.159 16227.962 - 16352.792: 97.8574% ( 23) 00:12:20.159 16352.792 - 16477.623: 98.1100% ( 27) 00:12:20.159 16477.623 - 16602.453: 98.3533% ( 26) 00:12:20.159 16602.453 - 16727.284: 98.5966% ( 26) 00:12:20.159 16727.284 - 16852.114: 98.7275% ( 14) 00:12:20.159 16852.114 - 16976.945: 98.7556% ( 3) 00:12:20.159 16976.945 - 17101.775: 98.7743% ( 2) 00:12:20.159 17101.775 - 17226.606: 98.8024% ( 3) 00:12:20.159 27213.044 - 27337.874: 98.8118% ( 1) 00:12:20.159 27337.874 - 27462.705: 98.8960% ( 9) 
00:12:20.159 27462.705 - 27587.535: 98.9802% ( 9) 00:12:20.159 27587.535 - 27712.366: 99.0363% ( 6) 00:12:20.159 27712.366 - 27837.196: 99.0737% ( 4) 00:12:20.159 27837.196 - 27962.027: 99.1018% ( 3) 00:12:20.159 27962.027 - 28086.857: 99.1299% ( 3) 00:12:20.159 28086.857 - 28211.688: 99.1579% ( 3) 00:12:20.159 28211.688 - 28336.518: 99.1954% ( 4) 00:12:20.159 28336.518 - 28461.349: 99.2234% ( 3) 00:12:20.159 28461.349 - 28586.179: 99.2515% ( 3) 00:12:20.159 28586.179 - 28711.010: 99.2796% ( 3) 00:12:20.159 28711.010 - 28835.840: 99.3170% ( 4) 00:12:20.159 28835.840 - 28960.670: 99.3451% ( 3) 00:12:20.159 28960.670 - 29085.501: 99.3731% ( 3) 00:12:20.159 29085.501 - 29210.331: 99.4012% ( 3) 00:12:20.159 32455.924 - 32705.585: 99.4480% ( 5) 00:12:20.159 32705.585 - 32955.246: 99.5135% ( 7) 00:12:20.159 32955.246 - 33204.907: 99.5696% ( 6) 00:12:20.159 33204.907 - 33454.568: 99.8035% ( 25) 00:12:20.159 33454.568 - 33704.229: 99.9719% ( 18) 00:12:20.159 34952.533 - 35202.194: 99.9813% ( 1) 00:12:20.159 35202.194 - 35451.855: 100.0000% ( 2) 00:12:20.159 00:12:20.159 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:20.159 ============================================================================== 00:12:20.159 Range in us Cumulative IO count 00:12:20.159 8862.964 - 8925.379: 0.0094% ( 1) 00:12:20.159 9050.210 - 9112.625: 0.0281% ( 2) 00:12:20.159 9112.625 - 9175.040: 0.0749% ( 5) 00:12:20.159 9175.040 - 9237.455: 0.1778% ( 11) 00:12:20.159 9237.455 - 9299.870: 0.3275% ( 16) 00:12:20.159 9299.870 - 9362.286: 0.7485% ( 45) 00:12:20.159 9362.286 - 9424.701: 1.0292% ( 30) 00:12:20.159 9424.701 - 9487.116: 1.5812% ( 59) 00:12:20.159 9487.116 - 9549.531: 1.9274% ( 37) 00:12:20.159 9549.531 - 9611.947: 2.4233% ( 53) 00:12:20.159 9611.947 - 9674.362: 3.2279% ( 86) 00:12:20.159 9674.362 - 9736.777: 4.2197% ( 106) 00:12:20.159 9736.777 - 9799.192: 5.2676% ( 112) 00:12:20.159 9799.192 - 9861.608: 6.3903% ( 120) 00:12:20.159 9861.608 - 9924.023: 8.4207% ( 217) 00:12:20.159 9924.023 - 9986.438: 10.5820% ( 231) 00:12:20.159 9986.438 - 10048.853: 12.6029% ( 216) 00:12:20.159 10048.853 - 10111.269: 15.2414% ( 282) 00:12:20.159 10111.269 - 10173.684: 17.8986% ( 284) 00:12:20.159 10173.684 - 10236.099: 19.9663% ( 221) 00:12:20.159 10236.099 - 10298.514: 21.7721% ( 193) 00:12:20.159 10298.514 - 10360.930: 24.0831% ( 247) 00:12:20.159 10360.930 - 10423.345: 25.7579% ( 179) 00:12:20.159 10423.345 - 10485.760: 27.1145% ( 145) 00:12:20.159 10485.760 - 10548.175: 28.4338% ( 141) 00:12:20.159 10548.175 - 10610.590: 29.6969% ( 135) 00:12:20.159 10610.590 - 10673.006: 30.7354% ( 111) 00:12:20.159 10673.006 - 10735.421: 31.6430% ( 97) 00:12:20.159 10735.421 - 10797.836: 32.7376% ( 117) 00:12:20.159 10797.836 - 10860.251: 34.1504% ( 151) 00:12:20.159 10860.251 - 10922.667: 35.7129% ( 167) 00:12:20.159 10922.667 - 10985.082: 37.3222% ( 172) 00:12:20.159 10985.082 - 11047.497: 39.0344% ( 183) 00:12:20.159 11047.497 - 11109.912: 40.9244% ( 202) 00:12:20.159 11109.912 - 11172.328: 42.8986% ( 211) 00:12:20.159 11172.328 - 11234.743: 45.0692% ( 232) 00:12:20.159 11234.743 - 11297.158: 47.1276% ( 220) 00:12:20.159 11297.158 - 11359.573: 49.5790% ( 262) 00:12:20.159 11359.573 - 11421.989: 51.6841% ( 225) 00:12:20.159 11421.989 - 11484.404: 54.1916% ( 268) 00:12:20.159 11484.404 - 11546.819: 56.7272% ( 271) 00:12:20.159 11546.819 - 11609.234: 58.8043% ( 222) 00:12:20.159 11609.234 - 11671.650: 60.6194% ( 194) 00:12:20.159 11671.650 - 11734.065: 62.1070% ( 159) 00:12:20.159 11734.065 - 11796.480: 63.5105% ( 150) 
00:12:20.159 11796.480 - 11858.895: 64.9607% ( 155) 00:12:20.159 11858.895 - 11921.310: 66.3641% ( 150) 00:12:20.159 11921.310 - 11983.726: 67.5711% ( 129) 00:12:20.159 11983.726 - 12046.141: 68.7406% ( 125) 00:12:20.160 12046.141 - 12108.556: 69.7698% ( 110) 00:12:20.160 12108.556 - 12170.971: 70.8084% ( 111) 00:12:20.160 12170.971 - 12233.387: 71.5943% ( 84) 00:12:20.160 12233.387 - 12295.802: 72.3335% ( 79) 00:12:20.160 12295.802 - 12358.217: 72.8948% ( 60) 00:12:20.160 12358.217 - 12420.632: 73.3626% ( 50) 00:12:20.160 12420.632 - 12483.048: 73.7930% ( 46) 00:12:20.160 12483.048 - 12545.463: 74.2328% ( 47) 00:12:20.160 12545.463 - 12607.878: 74.5883% ( 38) 00:12:20.160 12607.878 - 12670.293: 75.0000% ( 44) 00:12:20.160 12670.293 - 12732.709: 75.3462% ( 37) 00:12:20.160 12732.709 - 12795.124: 75.6269% ( 30) 00:12:20.160 12795.124 - 12857.539: 75.8888% ( 28) 00:12:20.160 12857.539 - 12919.954: 76.1040% ( 23) 00:12:20.160 12919.954 - 12982.370: 76.3754% ( 29) 00:12:20.160 12982.370 - 13044.785: 76.6093% ( 25) 00:12:20.160 13044.785 - 13107.200: 76.8619% ( 27) 00:12:20.160 13107.200 - 13169.615: 77.1707% ( 33) 00:12:20.160 13169.615 - 13232.030: 77.6010% ( 46) 00:12:20.160 13232.030 - 13294.446: 78.1811% ( 62) 00:12:20.160 13294.446 - 13356.861: 78.9296% ( 80) 00:12:20.160 13356.861 - 13419.276: 79.6688% ( 79) 00:12:20.160 13419.276 - 13481.691: 80.5015% ( 89) 00:12:20.160 13481.691 - 13544.107: 81.0909% ( 63) 00:12:20.160 13544.107 - 13606.522: 81.6804% ( 63) 00:12:20.160 13606.522 - 13668.937: 82.2137% ( 57) 00:12:20.160 13668.937 - 13731.352: 82.8125% ( 64) 00:12:20.160 13731.352 - 13793.768: 83.3177% ( 54) 00:12:20.160 13793.768 - 13856.183: 83.8791% ( 60) 00:12:20.160 13856.183 - 13918.598: 84.4966% ( 66) 00:12:20.160 13918.598 - 13981.013: 85.0019% ( 54) 00:12:20.160 13981.013 - 14043.429: 85.4042% ( 43) 00:12:20.160 14043.429 - 14105.844: 85.7129% ( 33) 00:12:20.160 14105.844 - 14168.259: 86.0591% ( 37) 00:12:20.160 14168.259 - 14230.674: 86.4895% ( 46) 00:12:20.160 14230.674 - 14293.090: 86.7702% ( 30) 00:12:20.160 14293.090 - 14355.505: 87.3129% ( 58) 00:12:20.160 14355.505 - 14417.920: 87.8181% ( 54) 00:12:20.160 14417.920 - 14480.335: 88.3701% ( 59) 00:12:20.160 14480.335 - 14542.750: 88.8941% ( 56) 00:12:20.160 14542.750 - 14605.166: 89.4180% ( 56) 00:12:20.160 14605.166 - 14667.581: 90.0636% ( 69) 00:12:20.160 14667.581 - 14729.996: 90.8028% ( 79) 00:12:20.160 14729.996 - 14792.411: 91.4671% ( 71) 00:12:20.160 14792.411 - 14854.827: 92.0191% ( 59) 00:12:20.160 14854.827 - 14917.242: 92.5150% ( 53) 00:12:20.160 14917.242 - 14979.657: 92.9454% ( 46) 00:12:20.160 14979.657 - 15042.072: 93.3196% ( 40) 00:12:20.160 15042.072 - 15104.488: 93.7126% ( 42) 00:12:20.160 15104.488 - 15166.903: 94.0775% ( 39) 00:12:20.160 15166.903 - 15229.318: 94.4330% ( 38) 00:12:20.160 15229.318 - 15291.733: 94.7979% ( 39) 00:12:20.160 15291.733 - 15354.149: 95.0786% ( 30) 00:12:20.160 15354.149 - 15416.564: 95.2657% ( 20) 00:12:20.160 15416.564 - 15478.979: 95.4341% ( 18) 00:12:20.160 15478.979 - 15541.394: 95.6306% ( 21) 00:12:20.160 15541.394 - 15603.810: 95.7897% ( 17) 00:12:20.160 15603.810 - 15666.225: 95.8926% ( 11) 00:12:20.160 15666.225 - 15728.640: 96.0704% ( 19) 00:12:20.160 15728.640 - 15791.055: 96.3698% ( 32) 00:12:20.160 15791.055 - 15853.470: 96.6972% ( 35) 00:12:20.160 15853.470 - 15915.886: 96.9124% ( 23) 00:12:20.160 15915.886 - 15978.301: 97.1089% ( 21) 00:12:20.160 15978.301 - 16103.131: 97.3896% ( 30) 00:12:20.160 16103.131 - 16227.962: 97.5580% ( 18) 00:12:20.160 16227.962 - 
16352.792: 97.6984% ( 15) 00:12:20.160 16352.792 - 16477.623: 97.8574% ( 17) 00:12:20.160 16477.623 - 16602.453: 98.0258% ( 18) 00:12:20.160 16602.453 - 16727.284: 98.2504% ( 24) 00:12:20.160 16727.284 - 16852.114: 98.3252% ( 8) 00:12:20.160 16852.114 - 16976.945: 98.3720% ( 5) 00:12:20.160 16976.945 - 17101.775: 98.4281% ( 6) 00:12:20.160 17101.775 - 17226.606: 98.5124% ( 9) 00:12:20.160 17226.606 - 17351.436: 98.6433% ( 14) 00:12:20.160 17351.436 - 17476.267: 98.6995% ( 6) 00:12:20.160 17476.267 - 17601.097: 98.7556% ( 6) 00:12:20.160 17601.097 - 17725.928: 98.7837% ( 3) 00:12:20.160 17725.928 - 17850.758: 98.8024% ( 2) 00:12:20.160 24341.943 - 24466.773: 98.8960% ( 10) 00:12:20.160 24466.773 - 24591.604: 98.9895% ( 10) 00:12:20.160 24591.604 - 24716.434: 99.0457% ( 6) 00:12:20.160 24716.434 - 24841.265: 99.0831% ( 4) 00:12:20.160 24841.265 - 24966.095: 99.1112% ( 3) 00:12:20.160 24966.095 - 25090.926: 99.1299% ( 2) 00:12:20.160 25090.926 - 25215.756: 99.1579% ( 3) 00:12:20.160 25215.756 - 25340.587: 99.1860% ( 3) 00:12:20.160 25340.587 - 25465.417: 99.2141% ( 3) 00:12:20.160 25465.417 - 25590.248: 99.2328% ( 2) 00:12:20.160 25590.248 - 25715.078: 99.2609% ( 3) 00:12:20.160 25715.078 - 25839.909: 99.2889% ( 3) 00:12:20.160 25839.909 - 25964.739: 99.3076% ( 2) 00:12:20.160 25964.739 - 26089.570: 99.3451% ( 4) 00:12:20.160 26089.570 - 26214.400: 99.3731% ( 3) 00:12:20.160 26214.400 - 26339.230: 99.3918% ( 2) 00:12:20.160 26339.230 - 26464.061: 99.4012% ( 1) 00:12:20.160 30333.806 - 30458.636: 99.4386% ( 4) 00:12:20.160 30458.636 - 30583.467: 99.4854% ( 5) 00:12:20.160 30583.467 - 30708.297: 99.5790% ( 10) 00:12:20.160 31956.602 - 32206.263: 99.5977% ( 2) 00:12:20.160 32206.263 - 32455.924: 99.6632% ( 7) 00:12:20.160 32455.924 - 32705.585: 99.7380% ( 8) 00:12:20.160 32705.585 - 32955.246: 99.7942% ( 6) 00:12:20.160 32955.246 - 33204.907: 99.8597% ( 7) 00:12:20.160 33204.907 - 33454.568: 99.9158% ( 6) 00:12:20.160 33454.568 - 33704.229: 99.9719% ( 6) 00:12:20.160 33704.229 - 33953.890: 100.0000% ( 3) 00:12:20.160 00:12:20.160 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:20.160 ============================================================================== 00:12:20.160 Range in us Cumulative IO count 00:12:20.160 8738.133 - 8800.549: 0.0094% ( 1) 00:12:20.160 8925.379 - 8987.794: 0.0187% ( 1) 00:12:20.160 9050.210 - 9112.625: 0.0655% ( 5) 00:12:20.160 9112.625 - 9175.040: 0.1029% ( 4) 00:12:20.160 9175.040 - 9237.455: 0.1684% ( 7) 00:12:20.160 9237.455 - 9299.870: 0.3462% ( 19) 00:12:20.160 9299.870 - 9362.286: 0.7579% ( 44) 00:12:20.160 9362.286 - 9424.701: 1.0105% ( 27) 00:12:20.160 9424.701 - 9487.116: 1.4222% ( 44) 00:12:20.160 9487.116 - 9549.531: 1.9742% ( 59) 00:12:20.160 9549.531 - 9611.947: 2.5730% ( 64) 00:12:20.160 9611.947 - 9674.362: 3.0314% ( 49) 00:12:20.160 9674.362 - 9736.777: 4.0606% ( 110) 00:12:20.160 9736.777 - 9799.192: 5.0992% ( 111) 00:12:20.160 9799.192 - 9861.608: 6.3903% ( 138) 00:12:20.160 9861.608 - 9924.023: 8.1119% ( 184) 00:12:20.160 9924.023 - 9986.438: 10.1048% ( 213) 00:12:20.160 9986.438 - 10048.853: 12.2006% ( 224) 00:12:20.160 10048.853 - 10111.269: 14.3806% ( 233) 00:12:20.160 10111.269 - 10173.684: 16.4764% ( 224) 00:12:20.160 10173.684 - 10236.099: 18.4225% ( 208) 00:12:20.160 10236.099 - 10298.514: 20.8177% ( 256) 00:12:20.160 10298.514 - 10360.930: 23.6153% ( 299) 00:12:20.160 10360.930 - 10423.345: 26.2818% ( 285) 00:12:20.160 10423.345 - 10485.760: 27.9004% ( 173) 00:12:20.160 10485.760 - 10548.175: 29.6501% ( 187) 
00:12:20.160 10548.175 - 10610.590: 30.8945% ( 133) 00:12:20.160 10610.590 - 10673.006: 32.1388% ( 133) 00:12:20.160 10673.006 - 10735.421: 33.0651% ( 99) 00:12:20.160 10735.421 - 10797.836: 34.3469% ( 137) 00:12:20.160 10797.836 - 10860.251: 35.4229% ( 115) 00:12:20.160 10860.251 - 10922.667: 36.8076% ( 148) 00:12:20.160 10922.667 - 10985.082: 38.3701% ( 167) 00:12:20.160 10985.082 - 11047.497: 39.8765% ( 161) 00:12:20.160 11047.497 - 11109.912: 42.0097% ( 228) 00:12:20.160 11109.912 - 11172.328: 43.6097% ( 171) 00:12:20.160 11172.328 - 11234.743: 45.2283% ( 173) 00:12:20.160 11234.743 - 11297.158: 47.6984% ( 264) 00:12:20.160 11297.158 - 11359.573: 50.1123% ( 258) 00:12:20.160 11359.573 - 11421.989: 52.2923% ( 233) 00:12:20.160 11421.989 - 11484.404: 54.6033% ( 247) 00:12:20.160 11484.404 - 11546.819: 56.4465% ( 197) 00:12:20.160 11546.819 - 11609.234: 57.8031% ( 145) 00:12:20.160 11609.234 - 11671.650: 59.3282% ( 163) 00:12:20.160 11671.650 - 11734.065: 61.0591% ( 185) 00:12:20.160 11734.065 - 11796.480: 62.6403% ( 169) 00:12:20.160 11796.480 - 11858.895: 63.8941% ( 134) 00:12:20.160 11858.895 - 11921.310: 65.4379% ( 165) 00:12:20.160 11921.310 - 11983.726: 66.9629% ( 163) 00:12:20.160 11983.726 - 12046.141: 68.4225% ( 156) 00:12:20.160 12046.141 - 12108.556: 69.6482% ( 131) 00:12:20.160 12108.556 - 12170.971: 70.5838% ( 100) 00:12:20.160 12170.971 - 12233.387: 71.6130% ( 110) 00:12:20.161 12233.387 - 12295.802: 72.3709% ( 81) 00:12:20.161 12295.802 - 12358.217: 73.1755% ( 86) 00:12:20.161 12358.217 - 12420.632: 73.7369% ( 60) 00:12:20.161 12420.632 - 12483.048: 74.1860% ( 48) 00:12:20.161 12483.048 - 12545.463: 74.5603% ( 40) 00:12:20.161 12545.463 - 12607.878: 74.8409% ( 30) 00:12:20.161 12607.878 - 12670.293: 75.0561% ( 23) 00:12:20.161 12670.293 - 12732.709: 75.2433% ( 20) 00:12:20.161 12732.709 - 12795.124: 75.3649% ( 13) 00:12:20.161 12795.124 - 12857.539: 75.5894% ( 24) 00:12:20.161 12857.539 - 12919.954: 75.8327% ( 26) 00:12:20.161 12919.954 - 12982.370: 76.1602% ( 35) 00:12:20.161 12982.370 - 13044.785: 76.5719% ( 44) 00:12:20.161 13044.785 - 13107.200: 77.1239% ( 59) 00:12:20.161 13107.200 - 13169.615: 77.6010% ( 51) 00:12:20.161 13169.615 - 13232.030: 78.0408% ( 47) 00:12:20.161 13232.030 - 13294.446: 78.7238% ( 73) 00:12:20.161 13294.446 - 13356.861: 79.1635% ( 47) 00:12:20.161 13356.861 - 13419.276: 79.7062% ( 58) 00:12:20.161 13419.276 - 13481.691: 80.0898% ( 41) 00:12:20.161 13481.691 - 13544.107: 80.5389% ( 48) 00:12:20.161 13544.107 - 13606.522: 81.0535% ( 55) 00:12:20.161 13606.522 - 13668.937: 81.5681% ( 55) 00:12:20.161 13668.937 - 13731.352: 82.3073% ( 79) 00:12:20.161 13731.352 - 13793.768: 82.9809% ( 72) 00:12:20.161 13793.768 - 13856.183: 83.6265% ( 69) 00:12:20.161 13856.183 - 13918.598: 84.3095% ( 73) 00:12:20.161 13918.598 - 13981.013: 84.7867% ( 51) 00:12:20.161 13981.013 - 14043.429: 85.1516% ( 39) 00:12:20.161 14043.429 - 14105.844: 85.4884% ( 36) 00:12:20.161 14105.844 - 14168.259: 85.9001% ( 44) 00:12:20.161 14168.259 - 14230.674: 86.3585% ( 49) 00:12:20.161 14230.674 - 14293.090: 86.7702% ( 44) 00:12:20.161 14293.090 - 14355.505: 87.2287% ( 49) 00:12:20.161 14355.505 - 14417.920: 87.7152% ( 52) 00:12:20.161 14417.920 - 14480.335: 88.1549% ( 47) 00:12:20.161 14480.335 - 14542.750: 88.6321% ( 51) 00:12:20.161 14542.750 - 14605.166: 89.2216% ( 63) 00:12:20.161 14605.166 - 14667.581: 89.8671% ( 69) 00:12:20.161 14667.581 - 14729.996: 90.6344% ( 82) 00:12:20.161 14729.996 - 14792.411: 91.3361% ( 75) 00:12:20.161 14792.411 - 14854.827: 91.8413% ( 54) 
00:12:20.161 14854.827 - 14917.242: 92.3840% ( 58) 00:12:20.161 14917.242 - 14979.657: 92.8518% ( 50) 00:12:20.161 14979.657 - 15042.072: 93.1793% ( 35) 00:12:20.161 15042.072 - 15104.488: 93.5629% ( 41) 00:12:20.161 15104.488 - 15166.903: 93.8249% ( 28) 00:12:20.161 15166.903 - 15229.318: 94.1149% ( 31) 00:12:20.161 15229.318 - 15291.733: 94.4424% ( 35) 00:12:20.161 15291.733 - 15354.149: 94.7418% ( 32) 00:12:20.161 15354.149 - 15416.564: 95.0225% ( 30) 00:12:20.161 15416.564 - 15478.979: 95.3125% ( 31) 00:12:20.161 15478.979 - 15541.394: 95.6306% ( 34) 00:12:20.161 15541.394 - 15603.810: 95.8926% ( 28) 00:12:20.161 15603.810 - 15666.225: 96.0423% ( 16) 00:12:20.161 15666.225 - 15728.640: 96.2201% ( 19) 00:12:20.161 15728.640 - 15791.055: 96.3978% ( 19) 00:12:20.161 15791.055 - 15853.470: 96.5382% ( 15) 00:12:20.161 15853.470 - 15915.886: 96.7159% ( 19) 00:12:20.161 15915.886 - 15978.301: 96.9031% ( 20) 00:12:20.161 15978.301 - 16103.131: 97.1931% ( 31) 00:12:20.161 16103.131 - 16227.962: 97.4083% ( 23) 00:12:20.161 16227.962 - 16352.792: 97.6516% ( 26) 00:12:20.161 16352.792 - 16477.623: 97.8761% ( 24) 00:12:20.161 16477.623 - 16602.453: 98.0165% ( 15) 00:12:20.161 16602.453 - 16727.284: 98.1381% ( 13) 00:12:20.161 16727.284 - 16852.114: 98.2036% ( 7) 00:12:20.161 16976.945 - 17101.775: 98.2410% ( 4) 00:12:20.161 17101.775 - 17226.606: 98.2878% ( 5) 00:12:20.161 17226.606 - 17351.436: 98.3346% ( 5) 00:12:20.161 17351.436 - 17476.267: 98.3907% ( 6) 00:12:20.161 17476.267 - 17601.097: 98.4375% ( 5) 00:12:20.161 17601.097 - 17725.928: 98.4843% ( 5) 00:12:20.161 17725.928 - 17850.758: 98.5311% ( 5) 00:12:20.161 17850.758 - 17975.589: 98.5685% ( 4) 00:12:20.161 17975.589 - 18100.419: 98.6153% ( 5) 00:12:20.161 18100.419 - 18225.250: 98.6995% ( 9) 00:12:20.161 18225.250 - 18350.080: 98.7556% ( 6) 00:12:20.161 18350.080 - 18474.910: 98.7837% ( 3) 00:12:20.161 18474.910 - 18599.741: 98.7930% ( 1) 00:12:20.161 18599.741 - 18724.571: 98.8024% ( 1) 00:12:20.161 21970.164 - 22094.994: 98.8118% ( 1) 00:12:20.161 22094.994 - 22219.825: 98.8679% ( 6) 00:12:20.161 22219.825 - 22344.655: 99.0831% ( 23) 00:12:20.161 22344.655 - 22469.486: 99.1112% ( 3) 00:12:20.161 22469.486 - 22594.316: 99.1299% ( 2) 00:12:20.161 22594.316 - 22719.147: 99.1579% ( 3) 00:12:20.161 22719.147 - 22843.977: 99.1860% ( 3) 00:12:20.161 22843.977 - 22968.808: 99.2047% ( 2) 00:12:20.161 22968.808 - 23093.638: 99.2328% ( 3) 00:12:20.161 23093.638 - 23218.469: 99.2515% ( 2) 00:12:20.161 23218.469 - 23343.299: 99.2889% ( 4) 00:12:20.161 23343.299 - 23468.130: 99.3170% ( 3) 00:12:20.161 23468.130 - 23592.960: 99.3451% ( 3) 00:12:20.161 23592.960 - 23717.790: 99.3731% ( 3) 00:12:20.161 23717.790 - 23842.621: 99.4012% ( 3) 00:12:20.161 27837.196 - 27962.027: 99.4760% ( 8) 00:12:20.161 29584.823 - 29709.653: 99.5041% ( 3) 00:12:20.161 29709.653 - 29834.484: 99.5322% ( 3) 00:12:20.161 29834.484 - 29959.314: 99.5696% ( 4) 00:12:20.161 29959.314 - 30084.145: 99.5977% ( 3) 00:12:20.161 30084.145 - 30208.975: 99.6351% ( 4) 00:12:20.161 30208.975 - 30333.806: 99.6725% ( 4) 00:12:20.161 30333.806 - 30458.636: 99.7100% ( 4) 00:12:20.161 30458.636 - 30583.467: 99.7380% ( 3) 00:12:20.161 30583.467 - 30708.297: 99.7754% ( 4) 00:12:20.161 30708.297 - 30833.128: 99.8129% ( 4) 00:12:20.161 30833.128 - 30957.958: 99.8409% ( 3) 00:12:20.161 30957.958 - 31082.789: 99.8784% ( 4) 00:12:20.161 31082.789 - 31207.619: 99.9064% ( 3) 00:12:20.161 31207.619 - 31332.450: 99.9439% ( 4) 00:12:20.161 31332.450 - 31457.280: 99.9813% ( 4) 00:12:20.161 31457.280 - 
31582.110: 100.0000% ( 2) 00:12:20.161 00:12:20.420 20:40:15 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:12:20.420 00:12:20.420 real 0m2.892s 00:12:20.420 user 0m2.382s 00:12:20.420 sys 0m0.393s 00:12:20.420 20:40:15 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.420 20:40:15 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:12:20.420 ************************************ 00:12:20.420 END TEST nvme_perf 00:12:20.420 ************************************ 00:12:20.420 20:40:15 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:20.420 20:40:15 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:20.420 20:40:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.420 20:40:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.420 ************************************ 00:12:20.420 START TEST nvme_hello_world 00:12:20.420 ************************************ 00:12:20.420 20:40:15 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:20.679 Initializing NVMe Controllers 00:12:20.679 Attached to 0000:00:10.0 00:12:20.679 Namespace ID: 1 size: 6GB 00:12:20.679 Attached to 0000:00:11.0 00:12:20.679 Namespace ID: 1 size: 5GB 00:12:20.679 Attached to 0000:00:13.0 00:12:20.679 Namespace ID: 1 size: 1GB 00:12:20.679 Attached to 0000:00:12.0 00:12:20.679 Namespace ID: 1 size: 4GB 00:12:20.679 Namespace ID: 2 size: 4GB 00:12:20.679 Namespace ID: 3 size: 4GB 00:12:20.679 Initialization complete. 00:12:20.679 INFO: using host memory buffer for IO 00:12:20.679 Hello world! 00:12:20.679 INFO: using host memory buffer for IO 00:12:20.679 Hello world! 00:12:20.679 INFO: using host memory buffer for IO 00:12:20.679 Hello world! 00:12:20.679 INFO: using host memory buffer for IO 00:12:20.679 Hello world! 00:12:20.679 INFO: using host memory buffer for IO 00:12:20.679 Hello world! 00:12:20.679 INFO: using host memory buffer for IO 00:12:20.679 Hello world! 
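[Editor's note] The long "Range in us / Cumulative IO count" tables that nvme_perf printed above are cumulative latency histograms: each line gives a latency bucket, the running percentage of all IOs completed at or below that bucket, and the raw count for the bucket in parentheses. A minimal conceptual sketch of how such lines can be derived from per-bucket counts follows; the bucket edges and counts here are invented for illustration, and this is not SPDK's actual reporting code.

/* Conceptual sketch: turning raw per-bucket IO counts into the
 * cumulative-percentage lines seen in the histograms above.
 * Edges (in microseconds) and counts are illustrative only. */
#include <stdio.h>

int main(void)
{
    double edges[] = { 9050.210, 9112.625, 9175.040, 9237.455, 9299.870 };
    unsigned counts[] = { 2, 5, 11, 16 };  /* IOs landing in each bucket */
    size_t nbuckets = sizeof(counts) / sizeof(counts[0]);
    unsigned total = 0, running = 0;

    for (size_t i = 0; i < nbuckets; i++)
        total += counts[i];

    for (size_t i = 0; i < nbuckets; i++) {
        running += counts[i];
        /* cumulative share of all IOs so far, plus this bucket's raw count */
        printf("%11.3f - %11.3f: %7.4f%% (%5u)\n",
               edges[i], edges[i + 1],
               100.0 * running / total, counts[i]);
    }
    return 0;
}

The last line of a complete histogram always reads 100.0000%, which is why each table above ends exactly there.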
00:12:20.679 00:12:20.679 real 0m0.405s 00:12:20.679 user 0m0.163s 00:12:20.679 sys 0m0.195s 00:12:20.679 20:40:15 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.679 20:40:15 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:20.679 ************************************ 00:12:20.679 END TEST nvme_hello_world 00:12:20.679 ************************************ 00:12:20.938 20:40:15 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:20.938 20:40:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:20.938 20:40:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.938 20:40:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.938 ************************************ 00:12:20.938 START TEST nvme_sgl 00:12:20.938 ************************************ 00:12:20.938 20:40:15 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:21.197 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:12:21.197 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:12:21.197 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:12:21.197 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:12:21.197 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:12:21.197 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:12:21.197 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:12:21.197 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:12:21.197 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:12:21.456 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:12:21.456 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:12:21.456 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:12:21.456 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:12:21.456 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:12:21.456 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:12:21.456 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:12:21.456 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:12:21.456 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:12:21.456 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:12:21.456 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:12:21.456 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:12:21.456 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:12:21.456 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:12:21.456 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:12:21.456 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:12:21.456 NVMe Readv/Writev Request test 00:12:21.456 Attached to 0000:00:10.0 00:12:21.456 Attached to 0000:00:11.0 00:12:21.456 Attached to 0000:00:13.0 00:12:21.456 Attached to 0000:00:12.0 00:12:21.456 0000:00:10.0: build_io_request_2 test passed 00:12:21.456 0000:00:10.0: build_io_request_4 test passed 00:12:21.456 0000:00:10.0: build_io_request_5 test passed 00:12:21.456 0000:00:10.0: build_io_request_6 test passed 00:12:21.456 0000:00:10.0: build_io_request_7 test passed 00:12:21.456 0000:00:10.0: build_io_request_10 test passed 00:12:21.456 0000:00:11.0: build_io_request_2 test passed 00:12:21.456 0000:00:11.0: build_io_request_4 test passed 00:12:21.456 0000:00:11.0: build_io_request_5 test passed 00:12:21.456 0000:00:11.0: build_io_request_6 test passed 00:12:21.456 0000:00:11.0: build_io_request_7 test passed 00:12:21.456 0000:00:11.0: build_io_request_10 test passed 00:12:21.456 Cleaning up... 00:12:21.457 00:12:21.457 real 0m0.554s 00:12:21.457 user 0m0.271s 00:12:21.457 sys 0m0.226s 00:12:21.457 20:40:16 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.457 20:40:16 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:12:21.457 ************************************ 00:12:21.457 END TEST nvme_sgl 00:12:21.457 ************************************ 00:12:21.457 20:40:16 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:21.457 20:40:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:21.457 20:40:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.457 20:40:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.457 ************************************ 00:12:21.457 START TEST nvme_e2edp 00:12:21.457 ************************************ 00:12:21.457 20:40:16 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:22.049 NVMe Write/Read with End-to-End data protection test 00:12:22.049 Attached to 0000:00:10.0 00:12:22.049 Attached to 0000:00:11.0 00:12:22.049 Attached to 0000:00:13.0 00:12:22.049 Attached to 0000:00:12.0 00:12:22.049 Cleaning up... 
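[Editor's note] The nvme_sgl run above deliberately builds some requests with bad lengths, so the "Invalid IO length parameter" lines are expected failures while the "test passed" lines are the well-formed requests. A plausible sketch of that kind of length validation is below; the block size, transfer-size limit, and the exact rules checked are assumptions for illustration, not the test's real criteria.

/* Conceptual sketch of IO-length validation like the sgl test exercises.
 * LBA size and max transfer size are assumed values, not device-reported. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool io_length_valid(uint64_t len, uint32_t block_size, uint64_t max_xfer)
{
    if (len == 0 || len % block_size != 0)
        return false;          /* must cover whole logical blocks */
    if (len > max_xfer)
        return false;          /* exceeds the transfer-size limit */
    return true;
}

int main(void)
{
    const uint32_t bs = 512;           /* assumed LBA size */
    const uint64_t mdts = 128 * 1024;  /* assumed 128 KiB limit */
    uint64_t cases[] = { 0, 100, 512, 4096, 256 * 1024 };

    for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
        if (io_length_valid(cases[i], bs, mdts))
            printf("build_io_request_%zu test passed\n", i);
        else
            printf("build_io_request_%zu Invalid IO length parameter\n", i);
    }
    return 0;
}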
00:12:22.049 00:12:22.049 real 0m0.395s 00:12:22.049 user 0m0.154s 00:12:22.049 sys 0m0.189s 00:12:22.049 20:40:16 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.049 20:40:16 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:12:22.049 ************************************ 00:12:22.049 END TEST nvme_e2edp 00:12:22.049 ************************************ 00:12:22.049 20:40:16 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:22.049 20:40:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.049 20:40:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.049 20:40:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.049 ************************************ 00:12:22.049 START TEST nvme_reserve 00:12:22.049 ************************************ 00:12:22.049 20:40:16 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:22.307 ===================================================== 00:12:22.307 NVMe Controller at PCI bus 0, device 16, function 0 00:12:22.307 ===================================================== 00:12:22.307 Reservations: Not Supported 00:12:22.307 ===================================================== 00:12:22.307 NVMe Controller at PCI bus 0, device 17, function 0 00:12:22.307 ===================================================== 00:12:22.307 Reservations: Not Supported 00:12:22.307 ===================================================== 00:12:22.307 NVMe Controller at PCI bus 0, device 19, function 0 00:12:22.307 ===================================================== 00:12:22.307 Reservations: Not Supported 00:12:22.307 ===================================================== 00:12:22.307 NVMe Controller at PCI bus 0, device 18, function 0 00:12:22.307 ===================================================== 00:12:22.307 Reservations: Not Supported 00:12:22.307 Reservation test passed 00:12:22.307 00:12:22.307 real 0m0.389s 00:12:22.307 user 0m0.136s 00:12:22.307 sys 0m0.205s 00:12:22.307 ************************************ 00:12:22.307 END TEST nvme_reserve 00:12:22.307 ************************************ 00:12:22.307 20:40:17 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.307 20:40:17 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:12:22.307 20:40:17 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:22.307 20:40:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.307 20:40:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.307 20:40:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.307 ************************************ 00:12:22.307 START TEST nvme_err_injection 00:12:22.307 ************************************ 00:12:22.307 20:40:17 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:22.879 NVMe Error Injection test 00:12:22.879 Attached to 0000:00:10.0 00:12:22.879 Attached to 0000:00:11.0 00:12:22.879 Attached to 0000:00:13.0 00:12:22.879 Attached to 0000:00:12.0 00:12:22.879 0000:00:10.0: get features failed as expected 00:12:22.879 0000:00:11.0: get features failed as expected 00:12:22.879 0000:00:13.0: get features failed as expected 00:12:22.879 0000:00:12.0: get features failed as expected 00:12:22.879 
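[Editor's note] The nvme_reserve output above reports "Reservations: Not Supported" for each QEMU controller. In NVMe, reservation support is advertised through the Optional NVM Command Support (ONCS) field of Identify Controller, with bit 5 covering the reservation command set. The sketch below shows that decode; the ONCS value is a made-up example, and this is the decoding idea, not the test's source.

/* Conceptual sketch: deciding the "Reservations: Not Supported" line
 * from an Identify Controller ONCS value. The value is hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define ONCS_RESERVATIONS (1u << 5)

int main(void)
{
    uint16_t oncs = 0x001e;  /* example value with the reservation bit clear */
    printf("Reservations: %s\n",
           (oncs & ONCS_RESERVATIONS) ? "Supported" : "Not Supported");
    return 0;
}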
0000:00:10.0: get features successfully as expected 00:12:22.879 0000:00:11.0: get features successfully as expected 00:12:22.879 0000:00:13.0: get features successfully as expected 00:12:22.879 0000:00:12.0: get features successfully as expected 00:12:22.879 0000:00:10.0: read failed as expected 00:12:22.879 0000:00:11.0: read failed as expected 00:12:22.879 0000:00:13.0: read failed as expected 00:12:22.879 0000:00:12.0: read failed as expected 00:12:22.879 0000:00:10.0: read successfully as expected 00:12:22.879 0000:00:11.0: read successfully as expected 00:12:22.879 0000:00:13.0: read successfully as expected 00:12:22.879 0000:00:12.0: read successfully as expected 00:12:22.879 Cleaning up... 00:12:22.879 00:12:22.879 real 0m0.407s 00:12:22.879 user 0m0.158s 00:12:22.879 sys 0m0.202s 00:12:22.879 ************************************ 00:12:22.879 END TEST nvme_err_injection 00:12:22.879 ************************************ 00:12:22.879 20:40:17 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.879 20:40:17 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:12:22.879 20:40:17 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:22.879 20:40:17 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:12:22.879 20:40:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.879 20:40:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.879 ************************************ 00:12:22.879 START TEST nvme_overhead 00:12:22.879 ************************************ 00:12:22.879 20:40:17 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:24.257 Initializing NVMe Controllers 00:12:24.257 Attached to 0000:00:10.0 00:12:24.257 Attached to 0000:00:11.0 00:12:24.257 Attached to 0000:00:13.0 00:12:24.257 Attached to 0000:00:12.0 00:12:24.257 Initialization complete. Launching workers. 
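[Editor's note] The err_injection sequence above follows a fixed pattern per controller: arm an injected error so Get Features "failed as expected", clear it so the same command succeeds, then repeat for reads. A minimal one-shot fault-injection sketch capturing that shape follows; it is a standalone illustration, not SPDK's error-injection machinery.

/* Conceptual sketch of one-shot fault injection: the first call fails
 * "as expected", the second succeeds once the injected error is consumed. */
#include <stdbool.h>
#include <stdio.h>

static bool inject_error = false;

static int get_features(void)
{
    if (inject_error) {
        inject_error = false;  /* one-shot: consumed by the first command */
        return -1;
    }
    return 0;
}

int main(void)
{
    inject_error = true;
    printf("get features %s\n",
           get_features() != 0 ? "failed as expected" : "unexpectedly passed");
    printf("get features %s\n",
           get_features() == 0 ? "successfully as expected" : "unexpectedly failed");
    return 0;
}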
00:12:24.257 submit (in ns) avg, min, max = 17273.8, 12038.1, 96295.2 00:12:24.257 complete (in ns) avg, min, max = 12357.3, 8316.2, 66220.0 00:12:24.257 00:12:24.257 Submit histogram 00:12:24.257 ================ 00:12:24.257 Range in us Cumulative Count 00:12:24.257 12.008 - 12.069: 0.0101% ( 1) 00:12:24.257 12.069 - 12.130: 0.0405% ( 3) 00:12:24.257 12.130 - 12.190: 0.1419% ( 10) 00:12:24.257 12.190 - 12.251: 0.5574% ( 41) 00:12:24.257 12.251 - 12.312: 1.4087% ( 84) 00:12:24.257 12.312 - 12.373: 2.4324% ( 101) 00:12:24.257 12.373 - 12.434: 3.6181% ( 117) 00:12:24.257 12.434 - 12.495: 4.6012% ( 97) 00:12:24.257 12.495 - 12.556: 5.1991% ( 59) 00:12:24.257 12.556 - 12.617: 5.7363% ( 53) 00:12:24.257 12.617 - 12.678: 6.0606% ( 32) 00:12:24.257 12.678 - 12.739: 6.3849% ( 32) 00:12:24.257 12.739 - 12.800: 6.5876% ( 20) 00:12:24.257 12.800 - 12.861: 6.8815% ( 29) 00:12:24.257 12.861 - 12.922: 7.4896% ( 60) 00:12:24.257 12.922 - 12.983: 7.9659% ( 47) 00:12:24.257 12.983 - 13.044: 8.7058% ( 73) 00:12:24.257 13.044 - 13.105: 9.2429% ( 53) 00:12:24.257 13.105 - 13.166: 9.8612% ( 61) 00:12:24.257 13.166 - 13.227: 10.3780% ( 51) 00:12:24.257 13.227 - 13.288: 11.1483% ( 76) 00:12:24.257 13.288 - 13.349: 11.8577% ( 70) 00:12:24.257 13.349 - 13.410: 12.5773% ( 71) 00:12:24.257 13.410 - 13.470: 13.0536% ( 47) 00:12:24.257 13.470 - 13.531: 13.4387% ( 38) 00:12:24.257 13.531 - 13.592: 13.8036% ( 36) 00:12:24.257 13.592 - 13.653: 13.9759% ( 17) 00:12:24.257 13.653 - 13.714: 14.1583% ( 18) 00:12:24.257 13.714 - 13.775: 14.2698% ( 11) 00:12:24.257 13.775 - 13.836: 14.4421% ( 17) 00:12:24.257 13.836 - 13.897: 14.7259% ( 28) 00:12:24.257 13.897 - 13.958: 15.4049% ( 67) 00:12:24.257 13.958 - 14.019: 16.1853% ( 77) 00:12:24.257 14.019 - 14.080: 17.1075% ( 91) 00:12:24.257 14.080 - 14.141: 18.0399% ( 92) 00:12:24.257 14.141 - 14.202: 18.9622% ( 91) 00:12:24.257 14.202 - 14.263: 19.8845% ( 91) 00:12:24.257 14.263 - 14.324: 21.3439% ( 144) 00:12:24.257 14.324 - 14.385: 23.3100% ( 194) 00:12:24.257 14.385 - 14.446: 25.7322% ( 239) 00:12:24.257 14.446 - 14.507: 28.0531% ( 229) 00:12:24.257 14.507 - 14.568: 30.2321% ( 215) 00:12:24.257 14.568 - 14.629: 32.1374% ( 188) 00:12:24.257 14.629 - 14.690: 33.5259% ( 137) 00:12:24.257 14.690 - 14.750: 34.8637% ( 132) 00:12:24.257 14.750 - 14.811: 35.7657% ( 89) 00:12:24.257 14.811 - 14.872: 36.6575% ( 88) 00:12:24.257 14.872 - 14.933: 37.4075% ( 74) 00:12:24.257 14.933 - 14.994: 37.9953% ( 58) 00:12:24.257 14.994 - 15.055: 38.5933% ( 59) 00:12:24.257 15.055 - 15.116: 39.4953% ( 89) 00:12:24.257 15.116 - 15.177: 40.6709% ( 116) 00:12:24.257 15.177 - 15.238: 42.2013% ( 151) 00:12:24.257 15.238 - 15.299: 43.6404% ( 142) 00:12:24.257 15.299 - 15.360: 45.1201% ( 146) 00:12:24.257 15.360 - 15.421: 46.3363% ( 120) 00:12:24.257 15.421 - 15.482: 47.2687% ( 92) 00:12:24.257 15.482 - 15.543: 48.3632% ( 108) 00:12:24.257 15.543 - 15.604: 49.1132% ( 74) 00:12:24.257 15.604 - 15.726: 50.3192% ( 119) 00:12:24.257 15.726 - 15.848: 50.9881% ( 66) 00:12:24.257 15.848 - 15.970: 51.3733% ( 38) 00:12:24.257 15.970 - 16.091: 51.5962% ( 22) 00:12:24.257 16.091 - 16.213: 51.7685% ( 17) 00:12:24.257 16.213 - 16.335: 51.9205% ( 15) 00:12:24.258 16.335 - 16.457: 52.1334% ( 21) 00:12:24.258 16.457 - 16.579: 52.4374% ( 30) 00:12:24.258 16.579 - 16.701: 52.6807% ( 24) 00:12:24.258 16.701 - 16.823: 52.9036% ( 22) 00:12:24.258 16.823 - 16.945: 53.1773% ( 27) 00:12:24.258 16.945 - 17.067: 53.3293% ( 15) 00:12:24.258 17.067 - 17.189: 53.5725% ( 24) 00:12:24.258 17.189 - 17.310: 53.7853% ( 21) 00:12:24.258 
17.310 - 17.432: 53.9982% ( 21) 00:12:24.258 17.432 - 17.554: 54.1705% ( 17) 00:12:24.258 17.554 - 17.676: 54.3124% ( 14) 00:12:24.258 17.676 - 17.798: 54.5353% ( 22) 00:12:24.258 17.798 - 17.920: 54.7482% ( 21) 00:12:24.258 17.920 - 18.042: 54.9103% ( 16) 00:12:24.258 18.042 - 18.164: 55.1535% ( 24) 00:12:24.258 18.164 - 18.286: 56.5217% ( 135) 00:12:24.258 18.286 - 18.408: 60.0284% ( 346) 00:12:24.258 18.408 - 18.530: 64.3458% ( 426) 00:12:24.258 18.530 - 18.651: 67.6092% ( 322) 00:12:24.258 18.651 - 18.773: 70.0517% ( 241) 00:12:24.258 18.773 - 18.895: 71.7746% ( 170) 00:12:24.258 18.895 - 19.017: 73.1023% ( 131) 00:12:24.258 19.017 - 19.139: 74.3488% ( 123) 00:12:24.258 19.139 - 19.261: 75.5751% ( 121) 00:12:24.258 19.261 - 19.383: 76.8927% ( 130) 00:12:24.258 19.383 - 19.505: 78.2001% ( 129) 00:12:24.258 19.505 - 19.627: 79.6595% ( 144) 00:12:24.258 19.627 - 19.749: 80.9162% ( 124) 00:12:24.258 19.749 - 19.870: 82.1425% ( 121) 00:12:24.258 19.870 - 19.992: 83.3283% ( 117) 00:12:24.258 19.992 - 20.114: 83.9364% ( 60) 00:12:24.258 20.114 - 20.236: 84.5546% ( 61) 00:12:24.258 20.236 - 20.358: 85.2032% ( 64) 00:12:24.258 20.358 - 20.480: 85.6187% ( 41) 00:12:24.258 20.480 - 20.602: 85.9734% ( 35) 00:12:24.258 20.602 - 20.724: 86.2471% ( 27) 00:12:24.258 20.724 - 20.846: 86.5714% ( 32) 00:12:24.258 20.846 - 20.968: 86.8349% ( 26) 00:12:24.258 20.968 - 21.090: 87.1896% ( 35) 00:12:24.258 21.090 - 21.211: 87.5747% ( 38) 00:12:24.258 21.211 - 21.333: 88.0916% ( 51) 00:12:24.258 21.333 - 21.455: 88.6997% ( 60) 00:12:24.258 21.455 - 21.577: 89.2774% ( 57) 00:12:24.258 21.577 - 21.699: 89.9463% ( 66) 00:12:24.258 21.699 - 21.821: 90.6760% ( 72) 00:12:24.258 21.821 - 21.943: 91.3449% ( 66) 00:12:24.258 21.943 - 22.065: 91.9124% ( 56) 00:12:24.258 22.065 - 22.187: 92.5205% ( 60) 00:12:24.258 22.187 - 22.309: 93.0678% ( 54) 00:12:24.258 22.309 - 22.430: 93.5543% ( 48) 00:12:24.258 22.430 - 22.552: 93.8380% ( 28) 00:12:24.258 22.552 - 22.674: 94.0103% ( 17) 00:12:24.258 22.674 - 22.796: 94.2232% ( 21) 00:12:24.258 22.796 - 22.918: 94.4259% ( 20) 00:12:24.258 22.918 - 23.040: 94.5982% ( 17) 00:12:24.258 23.040 - 23.162: 94.7198% ( 12) 00:12:24.258 23.162 - 23.284: 94.8515% ( 13) 00:12:24.258 23.284 - 23.406: 94.9326% ( 8) 00:12:24.258 23.406 - 23.528: 94.9731% ( 4) 00:12:24.258 23.650 - 23.771: 95.0238% ( 5) 00:12:24.258 23.771 - 23.893: 95.0745% ( 5) 00:12:24.258 23.893 - 24.015: 95.1049% ( 3) 00:12:24.258 24.015 - 24.137: 95.1860% ( 8) 00:12:24.258 24.137 - 24.259: 95.2671% ( 8) 00:12:24.258 24.259 - 24.381: 95.3684% ( 10) 00:12:24.258 24.381 - 24.503: 95.4697% ( 10) 00:12:24.258 24.503 - 24.625: 95.5407% ( 7) 00:12:24.258 24.625 - 24.747: 95.6319% ( 9) 00:12:24.258 24.747 - 24.869: 95.7130% ( 8) 00:12:24.258 24.869 - 24.990: 95.8751% ( 16) 00:12:24.258 24.990 - 25.112: 95.9258% ( 5) 00:12:24.258 25.112 - 25.234: 96.0677% ( 14) 00:12:24.258 25.234 - 25.356: 96.1893% ( 12) 00:12:24.258 25.356 - 25.478: 96.2400% ( 5) 00:12:24.258 25.478 - 25.600: 96.3312% ( 9) 00:12:24.258 25.600 - 25.722: 96.4021% ( 7) 00:12:24.258 25.722 - 25.844: 96.4832% ( 8) 00:12:24.258 25.844 - 25.966: 96.5744% ( 9) 00:12:24.258 25.966 - 26.088: 96.6352% ( 6) 00:12:24.258 26.088 - 26.210: 96.7265% ( 9) 00:12:24.258 26.210 - 26.331: 96.8582% ( 13) 00:12:24.258 26.331 - 26.453: 96.9494% ( 9) 00:12:24.258 26.453 - 26.575: 97.0204% ( 7) 00:12:24.258 26.575 - 26.697: 97.0508% ( 3) 00:12:24.258 26.697 - 26.819: 97.1014% ( 5) 00:12:24.258 26.819 - 26.941: 97.1521% ( 5) 00:12:24.258 26.941 - 27.063: 97.1927% ( 4) 00:12:24.258 27.063 - 
27.185: 97.2129% ( 2) 00:12:24.258 27.185 - 27.307: 97.2636% ( 5) 00:12:24.258 27.307 - 27.429: 97.2839% ( 2) 00:12:24.258 27.429 - 27.550: 97.3041% ( 2) 00:12:24.258 27.550 - 27.672: 97.3345% ( 3) 00:12:24.258 27.672 - 27.794: 97.3751% ( 4) 00:12:24.258 27.794 - 27.916: 97.4562% ( 8) 00:12:24.258 27.916 - 28.038: 97.5068% ( 5) 00:12:24.258 28.038 - 28.160: 97.5271% ( 2) 00:12:24.258 28.160 - 28.282: 97.5879% ( 6) 00:12:24.258 28.282 - 28.404: 97.6183% ( 3) 00:12:24.258 28.404 - 28.526: 97.6893% ( 7) 00:12:24.258 28.526 - 28.648: 97.7399% ( 5) 00:12:24.258 28.648 - 28.770: 97.7906% ( 5) 00:12:24.258 28.770 - 28.891: 97.8312% ( 4) 00:12:24.258 28.891 - 29.013: 97.8514% ( 2) 00:12:24.258 29.013 - 29.135: 97.9122% ( 6) 00:12:24.258 29.135 - 29.257: 97.9426% ( 3) 00:12:24.258 29.257 - 29.379: 97.9629% ( 2) 00:12:24.258 29.379 - 29.501: 97.9730% ( 1) 00:12:24.258 29.501 - 29.623: 98.0237% ( 5) 00:12:24.258 29.623 - 29.745: 98.0541% ( 3) 00:12:24.258 29.745 - 29.867: 98.0947% ( 4) 00:12:24.258 29.867 - 29.989: 98.1048% ( 1) 00:12:24.258 29.989 - 30.110: 98.1453% ( 4) 00:12:24.258 30.110 - 30.232: 98.1859% ( 4) 00:12:24.258 30.232 - 30.354: 98.2670% ( 8) 00:12:24.258 30.354 - 30.476: 98.3075% ( 4) 00:12:24.258 30.476 - 30.598: 98.3278% ( 2) 00:12:24.258 30.598 - 30.720: 98.3784% ( 5) 00:12:24.258 30.720 - 30.842: 98.4291% ( 5) 00:12:24.258 30.842 - 30.964: 98.4899% ( 6) 00:12:24.258 30.964 - 31.086: 98.5406% ( 5) 00:12:24.258 31.086 - 31.208: 98.5507% ( 1) 00:12:24.258 31.208 - 31.451: 98.6014% ( 5) 00:12:24.258 31.451 - 31.695: 98.6723% ( 7) 00:12:24.258 31.695 - 31.939: 98.7433% ( 7) 00:12:24.258 31.939 - 32.183: 98.8142% ( 7) 00:12:24.258 32.183 - 32.427: 98.8548% ( 4) 00:12:24.258 32.427 - 32.670: 98.8750% ( 2) 00:12:24.258 32.670 - 32.914: 98.9054% ( 3) 00:12:24.258 32.914 - 33.158: 98.9663% ( 6) 00:12:24.258 33.158 - 33.402: 99.0271% ( 6) 00:12:24.258 33.402 - 33.646: 99.1081% ( 8) 00:12:24.258 33.646 - 33.890: 99.1487% ( 4) 00:12:24.258 33.890 - 34.133: 99.1892% ( 4) 00:12:24.258 34.133 - 34.377: 99.2399% ( 5) 00:12:24.258 34.377 - 34.621: 99.3210% ( 8) 00:12:24.258 34.621 - 34.865: 99.3615% ( 4) 00:12:24.258 34.865 - 35.109: 99.4122% ( 5) 00:12:24.258 35.109 - 35.352: 99.4629% ( 5) 00:12:24.258 35.352 - 35.596: 99.5034% ( 4) 00:12:24.258 35.596 - 35.840: 99.5135% ( 1) 00:12:24.258 35.840 - 36.084: 99.5439% ( 3) 00:12:24.258 36.084 - 36.328: 99.5845% ( 4) 00:12:24.258 36.328 - 36.571: 99.6047% ( 2) 00:12:24.258 36.571 - 36.815: 99.6250% ( 2) 00:12:24.258 36.815 - 37.059: 99.6554% ( 3) 00:12:24.258 37.059 - 37.303: 99.6757% ( 2) 00:12:24.258 37.303 - 37.547: 99.7061% ( 3) 00:12:24.258 37.547 - 37.790: 99.7162% ( 1) 00:12:24.258 38.034 - 38.278: 99.7264% ( 1) 00:12:24.258 38.278 - 38.522: 99.7669% ( 4) 00:12:24.258 38.522 - 38.766: 99.7770% ( 1) 00:12:24.258 38.766 - 39.010: 99.7872% ( 1) 00:12:24.258 39.985 - 40.229: 99.7973% ( 1) 00:12:24.258 40.229 - 40.472: 99.8074% ( 1) 00:12:24.258 40.472 - 40.716: 99.8176% ( 1) 00:12:24.258 40.960 - 41.204: 99.8277% ( 1) 00:12:24.258 41.448 - 41.691: 99.8378% ( 1) 00:12:24.258 42.910 - 43.154: 99.8682% ( 3) 00:12:24.258 43.886 - 44.130: 99.8784% ( 1) 00:12:24.258 45.105 - 45.349: 99.8885% ( 1) 00:12:24.258 46.080 - 46.324: 99.8987% ( 1) 00:12:24.258 46.811 - 47.055: 99.9189% ( 2) 00:12:24.258 47.055 - 47.299: 99.9291% ( 1) 00:12:24.258 49.250 - 49.493: 99.9392% ( 1) 00:12:24.258 51.688 - 51.931: 99.9493% ( 1) 00:12:24.258 53.150 - 53.394: 99.9595% ( 1) 00:12:24.258 62.415 - 62.903: 99.9696% ( 1) 00:12:24.258 63.878 - 64.366: 99.9797% ( 1) 00:12:24.258 
67.779 - 68.267: 99.9899% ( 1) 00:12:24.259 96.061 - 96.549: 100.0000% ( 1) 00:12:24.259 00:12:24.259 Complete histogram 00:12:24.259 ================== 00:12:24.259 Range in us Cumulative Count 00:12:24.259 8.290 - 8.350: 0.0507% ( 5) 00:12:24.259 8.350 - 8.411: 0.4155% ( 36) 00:12:24.259 8.411 - 8.472: 1.1452% ( 72) 00:12:24.259 8.472 - 8.533: 2.0776% ( 92) 00:12:24.259 8.533 - 8.594: 3.4458% ( 135) 00:12:24.259 8.594 - 8.655: 4.5809% ( 112) 00:12:24.259 8.655 - 8.716: 5.7768% ( 118) 00:12:24.259 8.716 - 8.777: 6.4457% ( 66) 00:12:24.259 8.777 - 8.838: 6.7498% ( 30) 00:12:24.259 8.838 - 8.899: 7.0437% ( 29) 00:12:24.259 8.899 - 8.960: 7.6112% ( 56) 00:12:24.259 8.960 - 9.021: 8.4220% ( 80) 00:12:24.259 9.021 - 9.082: 9.2125% ( 78) 00:12:24.259 9.082 - 9.143: 10.0233% ( 80) 00:12:24.259 9.143 - 9.204: 10.7023% ( 67) 00:12:24.259 9.204 - 9.265: 11.4422% ( 73) 00:12:24.259 9.265 - 9.326: 12.3949% ( 94) 00:12:24.259 9.326 - 9.387: 13.1246% ( 72) 00:12:24.259 9.387 - 9.448: 13.6313% ( 50) 00:12:24.259 9.448 - 9.509: 13.9657% ( 33) 00:12:24.259 9.509 - 9.570: 14.2090% ( 24) 00:12:24.259 9.570 - 9.630: 14.4725% ( 26) 00:12:24.259 9.630 - 9.691: 15.1008% ( 62) 00:12:24.259 9.691 - 9.752: 16.0231% ( 91) 00:12:24.259 9.752 - 9.813: 16.8643% ( 83) 00:12:24.259 9.813 - 9.874: 17.9183% ( 104) 00:12:24.259 9.874 - 9.935: 19.1041% ( 117) 00:12:24.259 9.935 - 9.996: 21.4452% ( 231) 00:12:24.259 9.996 - 10.057: 24.5465% ( 306) 00:12:24.259 10.057 - 10.118: 27.9923% ( 340) 00:12:24.259 10.118 - 10.179: 30.5057% ( 248) 00:12:24.259 10.179 - 10.240: 32.6239% ( 209) 00:12:24.259 10.240 - 10.301: 34.3772% ( 173) 00:12:24.259 10.301 - 10.362: 35.7049% ( 131) 00:12:24.259 10.362 - 10.423: 36.6170% ( 90) 00:12:24.259 10.423 - 10.484: 37.3264% ( 70) 00:12:24.259 10.484 - 10.545: 38.4615% ( 112) 00:12:24.259 10.545 - 10.606: 40.3871% ( 190) 00:12:24.259 10.606 - 10.667: 42.7891% ( 237) 00:12:24.259 10.667 - 10.728: 44.9174% ( 210) 00:12:24.259 10.728 - 10.789: 46.6707% ( 173) 00:12:24.259 10.789 - 10.850: 47.9274% ( 124) 00:12:24.259 10.850 - 10.910: 48.6571% ( 72) 00:12:24.259 10.910 - 10.971: 49.2450% ( 58) 00:12:24.259 10.971 - 11.032: 49.7314% ( 48) 00:12:24.259 11.032 - 11.093: 49.9949% ( 26) 00:12:24.259 11.093 - 11.154: 50.2382% ( 24) 00:12:24.259 11.154 - 11.215: 50.4307% ( 19) 00:12:24.259 11.215 - 11.276: 50.5726% ( 14) 00:12:24.259 11.276 - 11.337: 50.7449% ( 17) 00:12:24.259 11.337 - 11.398: 50.9679% ( 22) 00:12:24.259 11.398 - 11.459: 51.1807% ( 21) 00:12:24.259 11.459 - 11.520: 51.4138% ( 23) 00:12:24.259 11.520 - 11.581: 51.6165% ( 20) 00:12:24.259 11.581 - 11.642: 51.7989% ( 18) 00:12:24.259 11.642 - 11.703: 52.0016% ( 20) 00:12:24.259 11.703 - 11.764: 52.1840% ( 18) 00:12:24.259 11.764 - 11.825: 52.4070% ( 22) 00:12:24.259 11.825 - 11.886: 52.7719% ( 36) 00:12:24.259 11.886 - 11.947: 53.1164% ( 34) 00:12:24.259 11.947 - 12.008: 53.2482% ( 13) 00:12:24.259 12.008 - 12.069: 53.3394% ( 9) 00:12:24.259 12.069 - 12.130: 53.4205% ( 8) 00:12:24.259 12.130 - 12.190: 53.5016% ( 8) 00:12:24.259 12.190 - 12.251: 53.5320% ( 3) 00:12:24.259 12.251 - 12.312: 53.6536% ( 12) 00:12:24.259 12.312 - 12.373: 53.6941% ( 4) 00:12:24.259 12.373 - 12.434: 53.7752% ( 8) 00:12:24.259 12.434 - 12.495: 53.8563% ( 8) 00:12:24.259 12.495 - 12.556: 53.9272% ( 7) 00:12:24.259 12.556 - 12.617: 54.0083% ( 8) 00:12:24.259 12.617 - 12.678: 54.2211% ( 21) 00:12:24.259 12.678 - 12.739: 54.6063% ( 38) 00:12:24.259 12.739 - 12.800: 54.8596% ( 25) 00:12:24.259 12.800 - 12.861: 55.2346% ( 37) 00:12:24.259 12.861 - 12.922: 55.7211% ( 48) 
00:12:24.259 12.922 - 12.983: 56.7447% ( 101) 00:12:24.259 12.983 - 13.044: 58.9237% ( 215) 00:12:24.259 13.044 - 13.105: 62.1263% ( 316) 00:12:24.259 13.105 - 13.166: 65.4100% ( 324) 00:12:24.259 13.166 - 13.227: 67.8322% ( 239) 00:12:24.259 13.227 - 13.288: 69.4639% ( 161) 00:12:24.259 13.288 - 13.349: 70.6699% ( 119) 00:12:24.259 13.349 - 13.410: 71.4604% ( 78) 00:12:24.259 13.410 - 13.470: 72.2003% ( 73) 00:12:24.259 13.470 - 13.531: 72.8084% ( 60) 00:12:24.259 13.531 - 13.592: 73.1833% ( 37) 00:12:24.259 13.592 - 13.653: 73.5685% ( 38) 00:12:24.259 13.653 - 13.714: 74.0043% ( 43) 00:12:24.259 13.714 - 13.775: 74.5313% ( 52) 00:12:24.259 13.775 - 13.836: 75.3116% ( 77) 00:12:24.259 13.836 - 13.897: 76.0616% ( 74) 00:12:24.259 13.897 - 13.958: 76.6292% ( 56) 00:12:24.259 13.958 - 14.019: 77.4602% ( 82) 00:12:24.259 14.019 - 14.080: 78.3115% ( 84) 00:12:24.259 14.080 - 14.141: 78.9703% ( 65) 00:12:24.259 14.141 - 14.202: 79.6899% ( 71) 00:12:24.259 14.202 - 14.263: 80.4500% ( 75) 00:12:24.259 14.263 - 14.324: 81.0581% ( 60) 00:12:24.259 14.324 - 14.385: 81.7270% ( 66) 00:12:24.259 14.385 - 14.446: 82.3249% ( 59) 00:12:24.259 14.446 - 14.507: 82.9837% ( 65) 00:12:24.259 14.507 - 14.568: 83.6830% ( 69) 00:12:24.259 14.568 - 14.629: 84.1289% ( 44) 00:12:24.259 14.629 - 14.690: 84.5647% ( 43) 00:12:24.259 14.690 - 14.750: 85.0410% ( 47) 00:12:24.259 14.750 - 14.811: 85.4464% ( 40) 00:12:24.259 14.811 - 14.872: 85.7809% ( 33) 00:12:24.259 14.872 - 14.933: 86.1863% ( 40) 00:12:24.259 14.933 - 14.994: 86.4802% ( 29) 00:12:24.259 14.994 - 15.055: 86.7842% ( 30) 00:12:24.259 15.055 - 15.116: 87.0883% ( 30) 00:12:24.259 15.116 - 15.177: 87.2504% ( 16) 00:12:24.259 15.177 - 15.238: 87.5241% ( 27) 00:12:24.259 15.238 - 15.299: 87.7268% ( 20) 00:12:24.259 15.299 - 15.360: 87.9903% ( 26) 00:12:24.259 15.360 - 15.421: 88.2436% ( 25) 00:12:24.259 15.421 - 15.482: 88.6794% ( 43) 00:12:24.259 15.482 - 15.543: 89.1051% ( 42) 00:12:24.259 15.543 - 15.604: 89.4497% ( 34) 00:12:24.259 15.604 - 15.726: 90.3415% ( 88) 00:12:24.259 15.726 - 15.848: 91.0510% ( 70) 00:12:24.259 15.848 - 15.970: 91.8820% ( 82) 00:12:24.259 15.970 - 16.091: 92.4293% ( 54) 00:12:24.259 16.091 - 16.213: 92.9462% ( 51) 00:12:24.259 16.213 - 16.335: 93.3820% ( 43) 00:12:24.259 16.335 - 16.457: 93.8583% ( 47) 00:12:24.259 16.457 - 16.579: 94.1928% ( 33) 00:12:24.259 16.579 - 16.701: 94.5069% ( 31) 00:12:24.259 16.701 - 16.823: 94.7704% ( 26) 00:12:24.259 16.823 - 16.945: 95.0340% ( 26) 00:12:24.259 16.945 - 17.067: 95.2873% ( 25) 00:12:24.259 17.067 - 17.189: 95.4089% ( 12) 00:12:24.259 17.189 - 17.310: 95.4697% ( 6) 00:12:24.259 17.310 - 17.432: 95.5711% ( 10) 00:12:24.259 17.432 - 17.554: 95.6623% ( 9) 00:12:24.259 17.554 - 17.676: 95.6927% ( 3) 00:12:24.259 17.676 - 17.798: 95.7535% ( 6) 00:12:24.259 17.798 - 17.920: 95.8143% ( 6) 00:12:24.259 17.920 - 18.042: 95.8650% ( 5) 00:12:24.259 18.042 - 18.164: 95.9258% ( 6) 00:12:24.259 18.164 - 18.286: 95.9765% ( 5) 00:12:24.259 18.286 - 18.408: 96.0170% ( 4) 00:12:24.259 18.408 - 18.530: 96.0778% ( 6) 00:12:24.259 18.530 - 18.651: 96.0981% ( 2) 00:12:24.259 18.651 - 18.773: 96.1995% ( 10) 00:12:24.259 18.773 - 18.895: 96.2603% ( 6) 00:12:24.259 18.895 - 19.017: 96.3109% ( 5) 00:12:24.259 19.017 - 19.139: 96.3819% ( 7) 00:12:24.259 19.139 - 19.261: 96.4021% ( 2) 00:12:24.259 19.261 - 19.383: 96.4427% ( 4) 00:12:24.259 19.383 - 19.505: 96.4832% ( 4) 00:12:24.259 19.505 - 19.627: 96.5136% ( 3) 00:12:24.259 19.627 - 19.749: 96.5440% ( 3) 00:12:24.259 19.749 - 19.870: 96.5542% ( 1) 00:12:24.259 
19.870 - 19.992: 96.5744% ( 2) 00:12:24.259 19.992 - 20.114: 96.5846% ( 1) 00:12:24.259 20.114 - 20.236: 96.6352% ( 5) 00:12:24.259 20.236 - 20.358: 96.6454% ( 1) 00:12:24.259 20.358 - 20.480: 96.6555% ( 1) 00:12:24.259 20.480 - 20.602: 96.6657% ( 1) 00:12:24.259 20.602 - 20.724: 96.6758% ( 1) 00:12:24.259 20.724 - 20.846: 96.7771% ( 10) 00:12:24.259 20.846 - 20.968: 96.9393% ( 16) 00:12:24.259 20.968 - 21.090: 97.0001% ( 6) 00:12:24.259 21.090 - 21.211: 97.0508% ( 5) 00:12:24.259 21.211 - 21.333: 97.1623% ( 11) 00:12:24.259 21.333 - 21.455: 97.2433% ( 8) 00:12:24.259 21.455 - 21.577: 97.3143% ( 7) 00:12:24.259 21.577 - 21.699: 97.4359% ( 12) 00:12:24.259 21.699 - 21.821: 97.4967% ( 6) 00:12:24.259 21.821 - 21.943: 97.5676% ( 7) 00:12:24.259 21.943 - 22.065: 97.6487% ( 8) 00:12:24.259 22.065 - 22.187: 97.6791% ( 3) 00:12:24.259 22.187 - 22.309: 97.6994% ( 2) 00:12:24.260 22.309 - 22.430: 97.7197% ( 2) 00:12:24.260 22.430 - 22.552: 97.7399% ( 2) 00:12:24.260 22.674 - 22.796: 97.7501% ( 1) 00:12:24.260 22.796 - 22.918: 97.7602% ( 1) 00:12:24.260 22.918 - 23.040: 97.7805% ( 2) 00:12:24.260 23.040 - 23.162: 97.8210% ( 4) 00:12:24.260 23.162 - 23.284: 97.8312% ( 1) 00:12:24.260 23.284 - 23.406: 97.8717% ( 4) 00:12:24.260 23.406 - 23.528: 97.8920% ( 2) 00:12:24.260 23.528 - 23.650: 97.9224% ( 3) 00:12:24.260 23.650 - 23.771: 97.9325% ( 1) 00:12:24.260 23.771 - 23.893: 97.9528% ( 2) 00:12:24.260 23.893 - 24.015: 98.0034% ( 5) 00:12:24.260 24.015 - 24.137: 98.0136% ( 1) 00:12:24.260 24.137 - 24.259: 98.0440% ( 3) 00:12:24.260 24.259 - 24.381: 98.0541% ( 1) 00:12:24.260 24.381 - 24.503: 98.0643% ( 1) 00:12:24.260 24.503 - 24.625: 98.0845% ( 2) 00:12:24.260 24.625 - 24.747: 98.1251% ( 4) 00:12:24.260 24.747 - 24.869: 98.1859% ( 6) 00:12:24.260 24.869 - 24.990: 98.2264% ( 4) 00:12:24.260 24.990 - 25.112: 98.2771% ( 5) 00:12:24.260 25.112 - 25.234: 98.3075% ( 3) 00:12:24.260 25.234 - 25.356: 98.3379% ( 3) 00:12:24.260 25.356 - 25.478: 98.3582% ( 2) 00:12:24.260 25.478 - 25.600: 98.3886% ( 3) 00:12:24.260 25.600 - 25.722: 98.3987% ( 1) 00:12:24.260 25.722 - 25.844: 98.4291% ( 3) 00:12:24.260 25.844 - 25.966: 98.4392% ( 1) 00:12:24.260 25.966 - 26.088: 98.5102% ( 7) 00:12:24.260 26.088 - 26.210: 98.5406% ( 3) 00:12:24.260 26.210 - 26.331: 98.5811% ( 4) 00:12:24.260 26.331 - 26.453: 98.6318% ( 5) 00:12:24.260 26.453 - 26.575: 98.6419% ( 1) 00:12:24.260 26.575 - 26.697: 98.6622% ( 2) 00:12:24.260 26.697 - 26.819: 98.6926% ( 3) 00:12:24.260 26.941 - 27.063: 98.7027% ( 1) 00:12:24.260 27.063 - 27.185: 98.7230% ( 2) 00:12:24.260 27.185 - 27.307: 98.7433% ( 2) 00:12:24.260 27.307 - 27.429: 98.7636% ( 2) 00:12:24.260 27.429 - 27.550: 98.8750% ( 11) 00:12:24.260 27.550 - 27.672: 98.9257% ( 5) 00:12:24.260 27.672 - 27.794: 98.9358% ( 1) 00:12:24.260 27.916 - 28.038: 98.9561% ( 2) 00:12:24.260 28.038 - 28.160: 98.9865% ( 3) 00:12:24.260 28.160 - 28.282: 99.0068% ( 2) 00:12:24.260 28.282 - 28.404: 99.0372% ( 3) 00:12:24.260 28.404 - 28.526: 99.0777% ( 4) 00:12:24.260 28.526 - 28.648: 99.1081% ( 3) 00:12:24.260 28.648 - 28.770: 99.1487% ( 4) 00:12:24.260 28.770 - 28.891: 99.1588% ( 1) 00:12:24.260 28.891 - 29.013: 99.1892% ( 3) 00:12:24.260 29.013 - 29.135: 99.2500% ( 6) 00:12:24.260 29.135 - 29.257: 99.2804% ( 3) 00:12:24.260 29.257 - 29.379: 99.3007% ( 2) 00:12:24.260 29.379 - 29.501: 99.3615% ( 6) 00:12:24.260 29.501 - 29.623: 99.3818% ( 2) 00:12:24.260 29.623 - 29.745: 99.4122% ( 3) 00:12:24.260 29.745 - 29.867: 99.4730% ( 6) 00:12:24.260 29.867 - 29.989: 99.5135% ( 4) 00:12:24.260 29.989 - 30.110: 99.5237% ( 
1) 00:12:24.260 30.110 - 30.232: 99.5541% ( 3) 00:12:24.260 30.232 - 30.354: 99.5743% ( 2) 00:12:24.260 30.354 - 30.476: 99.6047% ( 3) 00:12:24.260 30.476 - 30.598: 99.6250% ( 2) 00:12:24.260 30.598 - 30.720: 99.6351% ( 1) 00:12:24.260 30.720 - 30.842: 99.6656% ( 3) 00:12:24.260 30.842 - 30.964: 99.6858% ( 2) 00:12:24.260 30.964 - 31.086: 99.6960% ( 1) 00:12:24.260 31.086 - 31.208: 99.7365% ( 4) 00:12:24.260 31.208 - 31.451: 99.7466% ( 1) 00:12:24.260 31.451 - 31.695: 99.7669% ( 2) 00:12:24.260 31.695 - 31.939: 99.7770% ( 1) 00:12:24.260 31.939 - 32.183: 99.7872% ( 1) 00:12:24.260 32.183 - 32.427: 99.8074% ( 2) 00:12:24.260 32.427 - 32.670: 99.8176% ( 1) 00:12:24.260 32.670 - 32.914: 99.8277% ( 1) 00:12:24.260 32.914 - 33.158: 99.8378% ( 1) 00:12:24.260 34.133 - 34.377: 99.8480% ( 1) 00:12:24.260 34.865 - 35.109: 99.8581% ( 1) 00:12:24.260 35.109 - 35.352: 99.8682% ( 1) 00:12:24.260 38.034 - 38.278: 99.8784% ( 1) 00:12:24.260 38.278 - 38.522: 99.8885% ( 1) 00:12:24.260 41.204 - 41.448: 99.8987% ( 1) 00:12:24.260 41.691 - 41.935: 99.9088% ( 1) 00:12:24.260 41.935 - 42.179: 99.9291% ( 2) 00:12:24.260 42.179 - 42.423: 99.9392% ( 1) 00:12:24.260 42.423 - 42.667: 99.9493% ( 1) 00:12:24.260 42.910 - 43.154: 99.9595% ( 1) 00:12:24.260 43.154 - 43.398: 99.9696% ( 1) 00:12:24.260 43.642 - 43.886: 99.9797% ( 1) 00:12:24.260 51.200 - 51.444: 99.9899% ( 1) 00:12:24.260 65.829 - 66.316: 100.0000% ( 1) 00:12:24.260 00:12:24.260 00:12:24.260 real 0m1.370s 00:12:24.260 user 0m1.103s 00:12:24.260 sys 0m0.218s 00:12:24.260 20:40:19 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.260 20:40:19 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:12:24.260 ************************************ 00:12:24.260 END TEST nvme_overhead 00:12:24.260 ************************************ 00:12:24.260 20:40:19 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:24.260 20:40:19 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:24.260 20:40:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.260 20:40:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.260 ************************************ 00:12:24.260 START TEST nvme_arbitration 00:12:24.260 ************************************ 00:12:24.260 20:40:19 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:28.445 Initializing NVMe Controllers 00:12:28.445 Attached to 0000:00:10.0 00:12:28.445 Attached to 0000:00:11.0 00:12:28.445 Attached to 0000:00:13.0 00:12:28.445 Attached to 0000:00:12.0 00:12:28.445 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:12:28.445 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:12:28.445 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:12:28.445 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:12:28.445 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:12:28.445 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:12:28.445 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:28.445 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:12:28.445 Initialization complete. Launching workers. 
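[Editor's note] The nvme_overhead summary above ("submit (in ns) avg, min, max = ...") is a plain reduction over per-IO timing samples, and the Submit/Complete histograms are the same samples bucketed as in the earlier note. A minimal sketch of the summary line follows; the sample values are invented for illustration.

/* Conceptual sketch: computing the avg/min/max summary from per-IO
 * submit latencies in nanoseconds. Samples are illustrative only. */
#include <stdio.h>

int main(void)
{
    double ns[] = { 17211.0, 12038.1, 96295.2, 16510.4, 18331.5 };
    size_t n = sizeof(ns) / sizeof(ns[0]);
    double sum = 0, min = ns[0], max = ns[0];

    for (size_t i = 0; i < n; i++) {
        sum += ns[i];
        if (ns[i] < min) min = ns[i];
        if (ns[i] > max) max = ns[i];
    }
    printf("submit (in ns) avg, min, max = %.1f, %.1f, %.1f\n",
           sum / n, min, max);
    return 0;
}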
00:12:28.445 Starting thread on core 1 with urgent priority queue 00:12:28.445 Starting thread on core 2 with urgent priority queue 00:12:28.445 Starting thread on core 3 with urgent priority queue 00:12:28.445 Starting thread on core 0 with urgent priority queue 00:12:28.445 QEMU NVMe Ctrl (12340 ) core 0: 512.00 IO/s 195.31 secs/100000 ios 00:12:28.445 QEMU NVMe Ctrl (12342 ) core 0: 512.00 IO/s 195.31 secs/100000 ios 00:12:28.445 QEMU NVMe Ctrl (12341 ) core 1: 426.67 IO/s 234.38 secs/100000 ios 00:12:28.445 QEMU NVMe Ctrl (12342 ) core 1: 426.67 IO/s 234.38 secs/100000 ios 00:12:28.445 QEMU NVMe Ctrl (12343 ) core 2: 448.00 IO/s 223.21 secs/100000 ios 00:12:28.445 QEMU NVMe Ctrl (12342 ) core 3: 448.00 IO/s 223.21 secs/100000 ios 00:12:28.445 ======================================================== 00:12:28.445 00:12:28.445 00:12:28.445 real 0m3.500s 00:12:28.445 user 0m9.363s 00:12:28.445 sys 0m0.222s 00:12:28.445 20:40:22 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.445 ************************************ 00:12:28.445 END TEST nvme_arbitration 00:12:28.445 ************************************ 00:12:28.445 20:40:22 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:12:28.445 20:40:22 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:28.445 20:40:22 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:28.445 20:40:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.445 20:40:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.445 ************************************ 00:12:28.445 START TEST nvme_single_aen 00:12:28.445 ************************************ 00:12:28.445 20:40:22 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:28.445 Asynchronous Event Request test 00:12:28.445 Attached to 0000:00:10.0 00:12:28.445 Attached to 0000:00:11.0 00:12:28.445 Attached to 0000:00:13.0 00:12:28.445 Attached to 0000:00:12.0 00:12:28.445 Reset controller to setup AER completions for this process 00:12:28.445 Registering asynchronous event callbacks... 
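[Editor's note] In the nvme_arbitration results above, the two numbers on each per-core line are consistent with each other: "secs/100000 ios" is simply 100000 divided by the reported IO/s rate (512.00 IO/s gives 195.31 s, 426.67 gives 234.38 s, 448.00 gives 223.21 s). A short sketch of that arithmetic follows, using the rates from this run.

/* Conceptual sketch: deriving "secs/100000 ios" from the IO/s rates
 * printed by the arbitration run above. */
#include <stdio.h>

int main(void)
{
    double rates[] = { 512.00, 426.67, 448.00 };  /* IO/s from the log */

    for (size_t i = 0; i < sizeof(rates) / sizeof(rates[0]); i++)
        printf("%.2f IO/s  ->  %.2f secs/100000 ios\n",
               rates[i], 100000.0 / rates[i]);
    return 0;
}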
00:12:28.445 Getting orig temperature thresholds of all controllers 00:12:28.445 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:28.445 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:28.445 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:28.445 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:28.445 Setting all controllers temperature threshold low to trigger AER 00:12:28.445 Waiting for all controllers temperature threshold to be set lower 00:12:28.445 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:28.445 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:28.445 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:28.445 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:28.445 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:28.445 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:28.445 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:28.445 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:28.445 Waiting for all controllers to trigger AER and reset threshold 00:12:28.445 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:28.445 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:28.445 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:28.445 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:28.445 Cleaning up... 00:12:28.445 00:12:28.445 real 0m0.351s 00:12:28.445 user 0m0.122s 00:12:28.445 sys 0m0.186s 00:12:28.445 20:40:23 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.445 ************************************ 00:12:28.445 END TEST nvme_single_aen 00:12:28.445 ************************************ 00:12:28.445 20:40:23 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:12:28.445 20:40:23 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:12:28.445 20:40:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.445 20:40:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.445 20:40:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.445 ************************************ 00:12:28.445 START TEST nvme_doorbell_aers 00:12:28.445 ************************************ 00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
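[Editor's note] The AER test above works by reading each controller's original over-temperature threshold (343 K), then setting the threshold below the current reading (323 K) so every controller is instantly "over temperature" and fires an asynchronous event. The sketch below shows just that trigger condition; the comparison is illustrative of the idea, not the controller's internal logic.

/* Conceptual sketch of the trigger the aer test relies on: dropping the
 * threshold below the current temperature raises the alert. Values in
 * Kelvin mirror the log above. */
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    unsigned current_k = 323;    /* 50 Celsius, as reported above */
    unsigned threshold_k = 343;  /* original threshold, 70 Celsius */
    bool alert;

    alert = current_k >= threshold_k;
    printf("threshold %u K: temperature alert %s\n",
           threshold_k, alert ? "set" : "clear");

    threshold_k = current_k - 1; /* "Setting all controllers temperature threshold low" */
    alert = current_k >= threshold_k;
    printf("threshold %u K: temperature alert %s\n",
           threshold_k, alert ? "set" : "clear");
    return 0;
}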
00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:28.445 20:40:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:28.446 20:40:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:28.446 20:40:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:28.703 [2024-11-26 20:40:23.546207] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:12:38.674 Executing: test_write_invalid_db 00:12:38.674 Waiting for AER completion... 00:12:38.674 Failure: test_write_invalid_db 00:12:38.674 00:12:38.674 Executing: test_invalid_db_write_overflow_sq 00:12:38.674 Waiting for AER completion... 00:12:38.674 Failure: test_invalid_db_write_overflow_sq 00:12:38.674 00:12:38.674 Executing: test_invalid_db_write_overflow_cq 00:12:38.674 Waiting for AER completion... 00:12:38.674 Failure: test_invalid_db_write_overflow_cq 00:12:38.674 00:12:38.674 20:40:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:38.674 20:40:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:38.674 [2024-11-26 20:40:33.615296] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:12:48.650 Executing: test_write_invalid_db 00:12:48.650 Waiting for AER completion... 00:12:48.650 Failure: test_write_invalid_db 00:12:48.650 00:12:48.650 Executing: test_invalid_db_write_overflow_sq 00:12:48.650 Waiting for AER completion... 00:12:48.650 Failure: test_invalid_db_write_overflow_sq 00:12:48.650 00:12:48.650 Executing: test_invalid_db_write_overflow_cq 00:12:48.650 Waiting for AER completion... 00:12:48.650 Failure: test_invalid_db_write_overflow_cq 00:12:48.650 00:12:48.650 20:40:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:48.650 20:40:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:48.909 [2024-11-26 20:40:43.706147] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:12:58.878 Executing: test_write_invalid_db 00:12:58.878 Waiting for AER completion... 00:12:58.878 Failure: test_write_invalid_db 00:12:58.878 00:12:58.878 Executing: test_invalid_db_write_overflow_sq 00:12:58.878 Waiting for AER completion... 00:12:58.878 Failure: test_invalid_db_write_overflow_sq 00:12:58.878 00:12:58.878 Executing: test_invalid_db_write_overflow_cq 00:12:58.878 Waiting for AER completion... 
00:12:58.878 Failure: test_invalid_db_write_overflow_cq 00:12:58.878 00:12:58.878 20:40:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:58.878 20:40:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:58.878 [2024-11-26 20:40:53.687058] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:08.850 Executing: test_write_invalid_db 00:13:08.850 Waiting for AER completion... 00:13:08.850 Failure: test_write_invalid_db 00:13:08.850 00:13:08.850 Executing: test_invalid_db_write_overflow_sq 00:13:08.850 Waiting for AER completion... 00:13:08.850 Failure: test_invalid_db_write_overflow_sq 00:13:08.850 00:13:08.850 Executing: test_invalid_db_write_overflow_cq 00:13:08.850 Waiting for AER completion... 00:13:08.850 Failure: test_invalid_db_write_overflow_cq 00:13:08.850 00:13:08.850 ************************************ 00:13:08.850 END TEST nvme_doorbell_aers ************************************ 00:13:08.850 00:13:08.850 real 0m40.304s 00:13:08.850 user 0m28.550s 00:13:08.850 sys 0m11.346s 00:13:08.850 20:41:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.850 20:41:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:13:08.850 20:41:03 nvme -- nvme/nvme.sh@97 -- # uname 00:13:08.850 20:41:03 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:13:08.850 20:41:03 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:08.850 20:41:03 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:13:08.850 20:41:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.850 20:41:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.850 ************************************ 00:13:08.850 START TEST nvme_multi_aen ************************************ 00:13:08.850 20:41:03 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:08.850 [2024-11-26 20:41:03.823326] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:08.850 [2024-11-26 20:41:03.823464] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:08.850 [2024-11-26 20:41:03.823487] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:08.850 [2024-11-26 20:41:03.826171] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:08.850 [2024-11-26 20:41:03.826395] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:08.850 [2024-11-26 20:41:03.826424] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:08.850 [2024-11-26 20:41:03.828208] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request.
00:13:08.850 [2024-11-26 20:41:03.828417] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:08.850 [2024-11-26 20:41:03.828600] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:08.850 [2024-11-26 20:41:03.830664] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:08.850 [2024-11-26 20:41:03.830880] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:08.850 [2024-11-26 20:41:03.831057] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65175) is not found. Dropping the request. 00:13:09.108 Child process pid: 65691 00:13:09.366 [Child] Asynchronous Event Request test 00:13:09.366 [Child] Attached to 0000:00:10.0 00:13:09.366 [Child] Attached to 0000:00:11.0 00:13:09.366 [Child] Attached to 0000:00:13.0 00:13:09.366 [Child] Attached to 0000:00:12.0 00:13:09.366 [Child] Registering asynchronous event callbacks... 00:13:09.366 [Child] Getting orig temperature thresholds of all controllers 00:13:09.366 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.366 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.366 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.366 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.366 [Child] Waiting for all controllers to trigger AER and reset threshold 00:13:09.366 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.366 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.366 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.366 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.366 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.366 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.366 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.366 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.366 [Child] Cleaning up... 00:13:09.366 Asynchronous Event Request test 00:13:09.366 Attached to 0000:00:10.0 00:13:09.366 Attached to 0000:00:11.0 00:13:09.366 Attached to 0000:00:13.0 00:13:09.366 Attached to 0000:00:12.0 00:13:09.366 Reset controller to setup AER completions for this process 00:13:09.366 Registering asynchronous event callbacks...
00:13:09.366 Getting orig temperature thresholds of all controllers 00:13:09.366 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.366 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.366 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.366 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.366 Setting all controllers temperature threshold low to trigger AER 00:13:09.366 Waiting for all controllers temperature threshold to be set lower 00:13:09.366 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.366 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:09.366 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.366 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:09.366 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.366 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:09.366 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.366 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:09.366 Waiting for all controllers to trigger AER and reset threshold 00:13:09.366 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.366 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.366 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.366 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.366 Cleaning up... 00:13:09.366 ************************************ 00:13:09.366 END TEST nvme_multi_aen 00:13:09.366 ************************************ 00:13:09.366 00:13:09.366 real 0m0.810s 00:13:09.366 user 0m0.334s 00:13:09.366 sys 0m0.364s 00:13:09.366 20:41:04 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.366 20:41:04 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:13:09.366 20:41:04 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:09.366 20:41:04 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:09.366 20:41:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.366 20:41:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.366 ************************************ 00:13:09.366 START TEST nvme_startup 00:13:09.366 ************************************ 00:13:09.366 20:41:04 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:09.934 Initializing NVMe Controllers 00:13:09.934 Attached to 0000:00:10.0 00:13:09.934 Attached to 0000:00:11.0 00:13:09.934 Attached to 0000:00:13.0 00:13:09.934 Attached to 0000:00:12.0 00:13:09.934 Initialization complete. 00:13:09.934 Time used:209626.906 (us). 
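
Stepping back for a moment: the nvme_single_aen and nvme_multi_aen runs above both exercise the same temperature-threshold trick. They read the default threshold (feature 0x04, 343 K on these emulated controllers), drop it below the reported composite temperature of 323 K so the controller immediately posts a temperature-over-threshold asynchronous event, then restore the original value from the aer_cb callback. Outside SPDK the same round trip can be sketched with nvme-cli; the device name and raw values here are illustrative, not taken from this run:

    # Read the current Temperature Threshold feature (FID 0x04).
    nvme get-feature /dev/nvme0 --feature-id=0x04 --human-readable

    # Set the threshold below the reported composite temperature (values in
    # Kelvin), which makes the controller raise an AER almost immediately.
    nvme set-feature /dev/nvme0 --feature-id=0x04 --value=323

    # Restore the original threshold once the event has been consumed.
    nvme set-feature /dev/nvme0 --feature-id=0x04 --value=343
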
00:13:09.934 00:13:09.934 real 0m0.313s 00:13:09.934 user 0m0.105s 00:13:09.934 sys 0m0.161s 00:13:09.934 20:41:04 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.934 ************************************ 00:13:09.934 END TEST nvme_startup 00:13:09.934 ************************************ 00:13:09.934 20:41:04 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:13:09.934 20:41:04 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:13:09.934 20:41:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:09.934 20:41:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.934 20:41:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.934 ************************************ 00:13:09.934 START TEST nvme_multi_secondary 00:13:09.934 ************************************ 00:13:09.934 20:41:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:13:09.934 20:41:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65748 00:13:09.934 20:41:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:13:09.934 20:41:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65749 00:13:09.934 20:41:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:13:09.934 20:41:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:14.140 Initializing NVMe Controllers 00:13:14.140 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:14.140 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:14.140 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:14.140 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:14.140 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:14.140 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:14.140 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:14.140 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:14.140 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:14.140 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:14.140 Initialization complete. Launching workers. 
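
Two of the three spdk_nvme_perf instances launched in the xtrace above run as secondary processes: passing -i 0 gives all of them the same shared-memory instance ID, so the workers on core masks 0x2 and 0x4 attach to the controllers already initialized by the long-running 0x1 instance instead of probing PCIe themselves. Stripped of the harness bookkeeping, the launch pattern is roughly this (core masks and runtimes mirror the commands above); the per-core latency tables follow as each worker finishes:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    # First instance owns the controllers and runs longest (5 s on core 0x1).
    $perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
    # Second instance shares the same -i 0 instance ID on core 0x2.
    $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
    # Third instance runs in the foreground; the waits then collect the rest.
    $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
    wait "$pid0"
    wait "$pid1"
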
00:13:14.140 ======================================================== 00:13:14.140 Latency(us) 00:13:14.140 Device Information : IOPS MiB/s Average min max 00:13:14.140 PCIE (0000:00:10.0) NSID 1 from core 1: 4733.52 18.49 3378.17 1535.96 7993.00 00:13:14.140 PCIE (0000:00:11.0) NSID 1 from core 1: 4733.52 18.49 3379.70 1508.38 7923.30 00:13:14.140 PCIE (0000:00:13.0) NSID 1 from core 1: 4733.52 18.49 3379.65 1360.30 8811.05 00:13:14.140 PCIE (0000:00:12.0) NSID 1 from core 1: 4733.52 18.49 3379.61 1151.85 9184.69 00:13:14.140 PCIE (0000:00:12.0) NSID 2 from core 1: 4733.52 18.49 3379.76 1349.81 9057.32 00:13:14.140 PCIE (0000:00:12.0) NSID 3 from core 1: 4733.52 18.49 3379.71 1508.70 8290.26 00:13:14.140 ======================================================== 00:13:14.140 Total : 28401.09 110.94 3379.43 1151.85 9184.69 00:13:14.140 00:13:14.140 Initializing NVMe Controllers 00:13:14.140 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:14.140 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:14.140 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:14.140 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:14.140 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:14.140 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:14.140 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:14.140 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:14.140 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:14.140 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:14.140 Initialization complete. Launching workers. 00:13:14.140 ======================================================== 00:13:14.140 Latency(us) 00:13:14.140 Device Information : IOPS MiB/s Average min max 00:13:14.140 PCIE (0000:00:10.0) NSID 1 from core 2: 2052.38 8.02 7793.17 1566.53 16979.40 00:13:14.140 PCIE (0000:00:11.0) NSID 1 from core 2: 2052.38 8.02 7795.54 1839.04 18866.30 00:13:14.140 PCIE (0000:00:13.0) NSID 1 from core 2: 2052.38 8.02 7795.43 1355.55 14629.83 00:13:14.140 PCIE (0000:00:12.0) NSID 1 from core 2: 2052.38 8.02 7795.56 1316.74 14715.43 00:13:14.140 PCIE (0000:00:12.0) NSID 2 from core 2: 2052.38 8.02 7796.85 1325.48 14715.11 00:13:14.140 PCIE (0000:00:12.0) NSID 3 from core 2: 2052.38 8.02 7796.88 1418.80 14822.64 00:13:14.140 ======================================================== 00:13:14.140 Total : 12314.26 48.10 7795.57 1316.74 18866.30 00:13:14.140 00:13:14.140 20:41:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65748 00:13:15.092 Initializing NVMe Controllers 00:13:15.092 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:15.092 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:15.092 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:15.092 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:15.092 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:15.092 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:15.092 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:15.092 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:15.092 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:15.092 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:15.092 Initialization complete. Launching workers. 
00:13:15.092 ======================================================== 00:13:15.092 Latency(us) 00:13:15.092 Device Information : IOPS MiB/s Average min max 00:13:15.092 PCIE (0000:00:10.0) NSID 1 from core 0: 7102.63 27.74 2250.88 1009.29 7940.99 00:13:15.092 PCIE (0000:00:11.0) NSID 1 from core 0: 7102.63 27.74 2252.13 1019.11 7850.90 00:13:15.092 PCIE (0000:00:13.0) NSID 1 from core 0: 7102.63 27.74 2252.08 1021.95 7957.90 00:13:15.092 PCIE (0000:00:12.0) NSID 1 from core 0: 7102.63 27.74 2252.03 1021.92 8519.42 00:13:15.092 PCIE (0000:00:12.0) NSID 2 from core 0: 7102.63 27.74 2251.98 1033.37 7581.55 00:13:15.092 PCIE (0000:00:12.0) NSID 3 from core 0: 7102.63 27.74 2251.94 1034.19 7577.18 00:13:15.092 ======================================================== 00:13:15.092 Total : 42615.80 166.47 2251.84 1009.29 8519.42 00:13:15.092 00:13:15.349 20:41:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65749 00:13:15.349 20:41:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65817 00:13:15.349 20:41:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:13:15.349 20:41:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:15.350 20:41:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65818 00:13:15.350 20:41:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:13:18.628 Initializing NVMe Controllers 00:13:18.628 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:18.628 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:18.628 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:18.628 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:18.628 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:18.628 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:18.628 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:18.628 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:18.628 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:18.628 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:18.628 Initialization complete. Launching workers. 
00:13:18.628 ======================================================== 00:13:18.628 Latency(us) 00:13:18.628 Device Information : IOPS MiB/s Average min max 00:13:18.628 PCIE (0000:00:10.0) NSID 1 from core 1: 4368.78 17.07 3660.42 1641.97 10115.16 00:13:18.628 PCIE (0000:00:11.0) NSID 1 from core 1: 4368.78 17.07 3661.85 1653.34 9006.79 00:13:18.628 PCIE (0000:00:13.0) NSID 1 from core 1: 4368.78 17.07 3661.94 1365.99 8417.89 00:13:18.628 PCIE (0000:00:12.0) NSID 1 from core 1: 4368.78 17.07 3661.94 1636.25 7869.61 00:13:18.628 PCIE (0000:00:12.0) NSID 2 from core 1: 4368.78 17.07 3662.23 1619.18 8971.26 00:13:18.628 PCIE (0000:00:12.0) NSID 3 from core 1: 4368.78 17.07 3662.24 1662.49 9726.29 00:13:18.628 ======================================================== 00:13:18.629 Total : 26212.68 102.39 3661.77 1365.99 10115.16 00:13:18.629 00:13:18.884 Initializing NVMe Controllers 00:13:18.884 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:18.884 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:18.884 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:18.884 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:18.884 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:18.884 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:18.884 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:18.884 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:18.884 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:18.884 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:18.884 Initialization complete. Launching workers. 00:13:18.884 ======================================================== 00:13:18.884 Latency(us) 00:13:18.884 Device Information : IOPS MiB/s Average min max 00:13:18.884 PCIE (0000:00:10.0) NSID 1 from core 0: 4766.41 18.62 3354.97 1075.94 9409.55 00:13:18.884 PCIE (0000:00:11.0) NSID 1 from core 0: 4766.41 18.62 3356.51 1104.55 8037.48 00:13:18.884 PCIE (0000:00:13.0) NSID 1 from core 0: 4766.41 18.62 3356.62 1254.63 8527.78 00:13:18.884 PCIE (0000:00:12.0) NSID 1 from core 0: 4766.41 18.62 3356.63 1259.13 8676.76 00:13:18.884 PCIE (0000:00:12.0) NSID 2 from core 0: 4766.41 18.62 3356.62 1092.40 8603.02 00:13:18.884 PCIE (0000:00:12.0) NSID 3 from core 0: 4766.41 18.62 3356.58 1096.18 8085.04 00:13:18.884 ======================================================== 00:13:18.884 Total : 28598.44 111.71 3356.32 1075.94 9409.55 00:13:18.884 00:13:20.824 Initializing NVMe Controllers 00:13:20.824 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:20.824 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:20.824 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:20.824 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:20.824 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:20.824 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:20.824 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:20.824 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:20.824 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:20.824 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:20.824 Initialization complete. Launching workers. 
00:13:20.824 ======================================================== 00:13:20.824 Latency(us) 00:13:20.824 Device Information : IOPS MiB/s Average min max 00:13:20.824 PCIE (0000:00:10.0) NSID 1 from core 2: 3165.87 12.37 5050.83 1088.80 17637.65 00:13:20.824 PCIE (0000:00:11.0) NSID 1 from core 2: 3165.87 12.37 5049.29 1054.35 17097.32 00:13:20.824 PCIE (0000:00:13.0) NSID 1 from core 2: 3165.87 12.37 5048.93 1093.70 15604.41 00:13:20.824 PCIE (0000:00:12.0) NSID 1 from core 2: 3165.87 12.37 5048.79 1080.89 20543.31 00:13:20.824 PCIE (0000:00:12.0) NSID 2 from core 2: 3165.87 12.37 5048.94 1041.08 20728.32 00:13:20.825 PCIE (0000:00:12.0) NSID 3 from core 2: 3165.87 12.37 5048.83 897.33 16454.86 00:13:20.825 ======================================================== 00:13:20.825 Total : 18995.24 74.20 5049.27 897.33 20728.32 00:13:20.825 00:13:20.825 ************************************ 00:13:20.825 END TEST nvme_multi_secondary 00:13:20.825 ************************************ 00:13:20.825 20:41:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65817 00:13:20.825 20:41:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65818 00:13:20.825 00:13:20.825 real 0m10.862s 00:13:20.825 user 0m18.757s 00:13:20.825 sys 0m1.242s 00:13:20.825 20:41:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.825 20:41:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:13:20.825 20:41:15 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:13:20.825 20:41:15 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:13:20.825 20:41:15 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64736 ]] 00:13:20.825 20:41:15 nvme -- common/autotest_common.sh@1094 -- # kill 64736 00:13:20.825 20:41:15 nvme -- common/autotest_common.sh@1095 -- # wait 64736 00:13:20.825 [2024-11-26 20:41:15.632864] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.632967] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.633013] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.633041] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.636883] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.636964] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.636988] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.637015] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.640391] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 
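
The *ERROR* spam above (and continuing below) is expected teardown noise rather than a failure: kill_stub signals the long-lived stub process that has been holding the controllers open (pid 64736 here), and admin requests still queued on behalf of a process that has already exited (pid 65690) are dropped as each device is reclaimed. The teardown itself is just the guarded kill-and-reap visible in the xtrace; a sketch of that logic with a generic pid variable in place of the hard-coded 64736:

    kill_stub() {
        # Only signal the stub if its /proc entry still exists.
        [[ -e /proc/$stubpid ]] && kill "$stubpid"
        # Reap it so later tests start from a clean slate; ignore its exit code.
        wait "$stubpid" || true
        rm -f /var/run/spdk_stub0
    }
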
00:13:20.825 [2024-11-26 20:41:15.640459] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.640478] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.640499] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.643583] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.643845] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.643871] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:20.825 [2024-11-26 20:41:15.643892] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65690) is not found. Dropping the request. 00:13:21.083 20:41:15 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:13:21.083 20:41:15 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:13:21.083 20:41:15 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:21.083 20:41:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:21.083 20:41:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.083 20:41:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.083 ************************************ 00:13:21.083 START TEST bdev_nvme_reset_stuck_adm_cmd 00:13:21.083 ************************************ 00:13:21.083 20:41:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:21.083 * Looking for test storage... 
00:13:21.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:21.083 20:41:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:21.083 20:41:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:13:21.083 20:41:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.083 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:21.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.342 --rc genhtml_branch_coverage=1 00:13:21.342 --rc genhtml_function_coverage=1 00:13:21.342 --rc genhtml_legend=1 00:13:21.342 --rc geninfo_all_blocks=1 00:13:21.342 --rc geninfo_unexecuted_blocks=1 00:13:21.342 00:13:21.342 ' 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:21.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.342 --rc genhtml_branch_coverage=1 00:13:21.342 --rc genhtml_function_coverage=1 00:13:21.342 --rc genhtml_legend=1 00:13:21.342 --rc geninfo_all_blocks=1 00:13:21.342 --rc geninfo_unexecuted_blocks=1 00:13:21.342 00:13:21.342 ' 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:21.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.342 --rc genhtml_branch_coverage=1 00:13:21.342 --rc genhtml_function_coverage=1 00:13:21.342 --rc genhtml_legend=1 00:13:21.342 --rc geninfo_all_blocks=1 00:13:21.342 --rc geninfo_unexecuted_blocks=1 00:13:21.342 00:13:21.342 ' 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:21.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.342 --rc genhtml_branch_coverage=1 00:13:21.342 --rc genhtml_function_coverage=1 00:13:21.342 --rc genhtml_legend=1 00:13:21.342 --rc geninfo_all_blocks=1 00:13:21.342 --rc geninfo_unexecuted_blocks=1 00:13:21.342 00:13:21.342 ' 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:21.342 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:21.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65984 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65984 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65984 ']' 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
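
The sequence that follows is the heart of bdev_nvme_reset_stuck_adm_cmd: attach the first controller as nvme0, arm an error injection so the next admin GET FEATURES (opcode 10) is held for up to 15 s and completed with SCT=0/SC=1, fire that command via bdev_nvme_send_cmd, reset the controller while the command is stuck, then decode the captured completion to confirm the injected status came back. Condensed into its bare RPC calls (a sketch assuming a running spdk_tgt; /tmp/err_inj.txt stands in for the mktemp'd file, and the base64 payload is the same Get Features command buffer used below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

    # Hold the next admin GET FEATURES (opc 10) for up to 15 s, then complete
    # it with the injected status SCT=0, SC=1 instead of submitting it.
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    # Send the doomed command in the background, then reset while it is stuck.
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
        > /tmp/err_inj.txt &
    $rpc bdev_nvme_reset_controller nvme0
    wait

    # The completion comes back base64-encoded in the JSON result; dumping it
    # byte by byte is how base64_decode_bits extracts SC and SCT to compare
    # against the injected values.
    jq -r .cpl /tmp/err_inj.txt | base64 -d | hexdump -ve '/1 "0x%02x\n"'
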
00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.343 20:41:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:21.601 [2024-11-26 20:41:16.339476] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:13:21.601 [2024-11-26 20:41:16.339681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65984 ] 00:13:21.601 [2024-11-26 20:41:16.573921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.859 [2024-11-26 20:41:16.749649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.859 [2024-11-26 20:41:16.749811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.859 [2024-11-26 20:41:16.749994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.859 [2024-11-26 20:41:16.750564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.801 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.801 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:13:22.801 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:13:22.801 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.801 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:23.058 nvme0n1 00:13:23.058 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_poGzU.txt 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:23.059 true 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732653677 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66013 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:13:23.059 20:41:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:24.965 [2024-11-26 20:41:19.839749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:24.965 [2024-11-26 20:41:19.840190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:24.965 [2024-11-26 20:41:19.840243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:24.965 [2024-11-26 20:41:19.840264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.965 [2024-11-26 20:41:19.842756] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:24.965 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66013 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66013 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66013 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_poGzU.txt 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:24.965 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:25.224 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:25.224 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:13:25.224 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:13:25.224 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_poGzU.txt 00:13:25.224 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65984 00:13:25.224 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65984 ']' 00:13:25.224 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65984 00:13:25.224 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:13:25.224 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.224 20:41:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65984 00:13:25.224 killing process with pid 65984 00:13:25.224 20:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.224 20:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.224 20:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65984' 00:13:25.224 20:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65984 00:13:25.224 20:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65984 00:13:28.506 20:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:28.506 20:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:28.506 ************************************ 00:13:28.506 END TEST bdev_nvme_reset_stuck_adm_cmd 00:13:28.506 ************************************ 00:13:28.506 00:13:28.506 real 0m7.192s
00:13:28.506 user 0m25.042s 00:13:28.506 sys 0m0.855s 00:13:28.506 20:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.506 20:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:28.506 20:41:23 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:28.506 20:41:23 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:28.506 20:41:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:28.506 20:41:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.506 20:41:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.506 ************************************ 00:13:28.506 START TEST nvme_fio ************************************ 00:13:28.506 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:13:28.506 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:28.506 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:28.506 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:28.506 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:28.506 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:13:28.506 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:28.506 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:28.506 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:28.506 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:28.506 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:28.506 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:13:28.506 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:28.506 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:28.506 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:28.506 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:28.772 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:28.772 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:29.039 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:29.039 20:41:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:29.039 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:29.040 20:41:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:29.297 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:29.297 fio-3.35 00:13:29.297 Starting 1 thread 00:13:32.600 00:13:32.600 test: (groupid=0, jobs=1): err= 0: pid=66178: Tue Nov 26 20:41:27 2024 00:13:32.600 read: IOPS=15.9k, BW=62.0MiB/s (65.0MB/s)(124MiB/2001msec) 00:13:32.600 slat (usec): min=4, max=453, avg= 6.52, stdev= 3.99 00:13:32.600 clat (usec): min=297, max=9410, avg=4007.23, stdev=910.38 00:13:32.600 lat (usec): min=303, max=9417, avg=4013.75, stdev=911.63 00:13:32.600 clat percentiles (usec): 00:13:32.600 | 1.00th=[ 2900], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 3163], 00:13:32.600 | 30.00th=[ 3261], 40.00th=[ 3752], 50.00th=[ 3982], 60.00th=[ 4146], 00:13:32.600 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 4948], 95.00th=[ 5669], 00:13:32.600 | 99.00th=[ 7111], 99.50th=[ 7963], 99.90th=[ 8848], 99.95th=[ 9110], 00:13:32.600 | 99.99th=[ 9241] 00:13:32.600 bw ( KiB/s): min=59560, max=76592, per=100.00%, avg=66946.67, stdev=8737.76, samples=3 00:13:32.600 iops : min=14890, max=19148, avg=16736.67, stdev=2184.44, samples=3 00:13:32.600 write: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(124MiB/2001msec); 0 zone resets 00:13:32.600 slat (usec): min=4, max=455, avg= 6.75, stdev= 4.66 00:13:32.600 clat (usec): min=345, max=9363, avg=4017.70, stdev=914.34 00:13:32.600 lat (usec): min=353, max=9371, avg=4024.44, stdev=915.65 00:13:32.600 clat percentiles (usec): 00:13:32.600 | 1.00th=[ 2868], 5.00th=[ 3032], 10.00th=[ 3064], 20.00th=[ 3163], 00:13:32.600 | 30.00th=[ 3261], 40.00th=[ 3785], 50.00th=[ 3982], 60.00th=[ 4178], 00:13:32.600 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 5014], 95.00th=[ 5735], 00:13:32.600 | 99.00th=[ 7046], 99.50th=[ 7963], 99.90th=[ 8848], 99.95th=[ 8979], 00:13:32.600 | 99.99th=[ 9241] 00:13:32.600 bw ( KiB/s): min=59872, max=76296, per=100.00%, avg=66912.00, stdev=8459.18, samples=3 00:13:32.600 iops : min=14968, max=19074, avg=16728.00, stdev=2114.79, samples=3 00:13:32.601 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:32.601 lat (msec) : 2=0.18%, 4=51.11%, 10=48.67% 00:13:32.601 cpu : usr=98.10%, sys=0.45%, ctx=27, majf=0, minf=606 00:13:32.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:13:32.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:32.601 issued rwts: total=31752,31788,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:32.601 00:13:32.601 Run status group 0 (all jobs): 00:13:32.601 READ: bw=62.0MiB/s (65.0MB/s), 62.0MiB/s-62.0MiB/s (65.0MB/s-65.0MB/s), io=124MiB (130MB), run=2001-2001msec 00:13:32.601 WRITE: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=124MiB (130MB), run=2001-2001msec 00:13:32.859 ----------------------------------------------------- 00:13:32.859 Suppressions used: 00:13:32.859 count bytes template 00:13:32.859 1 32 /usr/src/fio/parse.c 00:13:32.859 1 8 libtcmalloc_minimal.so 00:13:32.859 ----------------------------------------------------- 00:13:32.859 00:13:32.859 20:41:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:32.859 20:41:27 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:32.859 20:41:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:32.859 20:41:27 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:33.119 20:41:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:33.119 20:41:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:33.415 20:41:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:33.415 20:41:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:33.415 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:33.415 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:33.415 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.415 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:33.415 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:33.415 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:33.415 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:33.415 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.415 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:33.415 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:33.415 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:33.673 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.673 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.673 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:33.673 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:13:33.673 20:41:28 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:33.673 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:33.673 fio-3.35 00:13:33.673 Starting 1 thread 00:13:36.959 00:13:36.959 test: (groupid=0, jobs=1): err= 0: pid=66244: Tue Nov 26 20:41:31 2024 00:13:36.959 read: IOPS=14.6k, BW=57.2MiB/s (60.0MB/s)(114MiB/2001msec) 00:13:36.959 slat (usec): min=4, max=453, avg= 7.14, stdev= 3.45 00:13:36.959 clat (usec): min=315, max=11753, avg=4338.42, stdev=843.53 00:13:36.959 lat (usec): min=322, max=11760, avg=4345.56, stdev=844.29 00:13:36.959 clat percentiles (usec): 00:13:36.959 | 1.00th=[ 3261], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3720], 00:13:36.959 | 30.00th=[ 3949], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4359], 00:13:36.959 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5211], 95.00th=[ 6128], 00:13:36.959 | 99.00th=[ 7504], 99.50th=[ 8094], 99.90th=[10028], 99.95th=[10290], 00:13:36.959 | 99.99th=[11469] 00:13:36.959 bw ( KiB/s): min=58536, max=62240, per=100.00%, avg=60138.67, stdev=1901.68, samples=3 00:13:36.959 iops : min=14634, max=15560, avg=15034.67, stdev=475.42, samples=3 00:13:36.959 write: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(115MiB/2001msec); 0 zone resets 00:13:36.959 slat (usec): min=5, max=456, avg= 7.49, stdev= 4.31 00:13:36.959 clat (usec): min=342, max=12312, avg=4359.09, stdev=895.15 00:13:36.959 lat (usec): min=349, max=12334, avg=4366.58, stdev=895.91 00:13:36.959 clat percentiles (usec): 00:13:36.959 | 1.00th=[ 3261], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3752], 00:13:36.959 | 30.00th=[ 3982], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4359], 00:13:36.959 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5211], 95.00th=[ 6128], 00:13:36.959 | 99.00th=[ 7767], 99.50th=[ 8848], 99.90th=[11207], 99.95th=[11338], 00:13:36.959 | 99.99th=[11994] 00:13:36.959 bw ( KiB/s): min=58832, max=61736, per=100.00%, avg=59840.00, stdev=1643.08, samples=3 00:13:36.959 iops : min=14708, max=15434, avg=14960.00, stdev=410.77, samples=3 00:13:36.959 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:36.959 lat (msec) : 2=0.05%, 4=32.28%, 10=67.43%, 20=0.21% 00:13:36.959 cpu : usr=97.80%, sys=0.30%, ctx=27, majf=0, minf=606 00:13:36.959 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:36.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:36.959 issued rwts: total=29302,29349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.959 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:36.959 00:13:36.959 Run status group 0 (all jobs): 00:13:36.959 READ: bw=57.2MiB/s (60.0MB/s), 57.2MiB/s-57.2MiB/s (60.0MB/s-60.0MB/s), io=114MiB (120MB), run=2001-2001msec 00:13:36.959 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=115MiB (120MB), run=2001-2001msec 00:13:37.218 ----------------------------------------------------- 00:13:37.218 Suppressions used: 00:13:37.218 count bytes template 00:13:37.218 1 32 /usr/src/fio/parse.c 00:13:37.219 1 8 libtcmalloc_minimal.so 00:13:37.219 ----------------------------------------------------- 00:13:37.219 00:13:37.219 20:41:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:13:37.219 20:41:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:37.219 20:41:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:37.219 20:41:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:37.477 20:41:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:37.477 20:41:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:37.736 20:41:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:37.736 20:41:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:37.736 20:41:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:37.994 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:37.994 fio-3.35 00:13:37.994 Starting 1 thread 00:13:43.262 00:13:43.262 test: (groupid=0, jobs=1): err= 0: pid=66310: Tue Nov 26 20:41:37 2024 00:13:43.262 read: IOPS=14.8k, BW=58.0MiB/s (60.8MB/s)(116MiB/2001msec) 00:13:43.262 slat (usec): min=4, max=387, avg= 6.76, stdev= 3.33 00:13:43.262 clat (usec): min=221, max=10354, avg=4286.64, stdev=713.82 00:13:43.262 lat (usec): min=226, max=10360, avg=4293.40, stdev=714.57 00:13:43.262 clat percentiles (usec): 00:13:43.262 | 1.00th=[ 2704], 5.00th=[ 3195], 10.00th=[ 3359], 20.00th=[ 3785], 00:13:43.262 | 30.00th=[ 4080], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424],
00:13:43.262 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5080], 95.00th=[ 5407], 00:13:43.262 | 99.00th=[ 6456], 99.50th=[ 7308], 99.90th=[ 8455], 99.95th=[ 8848], 00:13:43.262 | 99.99th=[10159] 00:13:43.262 bw ( KiB/s): min=56080, max=59640, per=96.94%, avg=57573.33, stdev=1847.95, samples=3 00:13:43.262 iops : min=14020, max=14910, avg=14393.33, stdev=461.99, samples=3 00:13:43.262 write: IOPS=14.9k, BW=58.0MiB/s (60.8MB/s)(116MiB/2001msec); 0 zone resets 00:13:43.262 slat (usec): min=4, max=353, avg= 6.94, stdev= 4.05 00:13:43.262 clat (usec): min=320, max=10238, avg=4296.06, stdev=707.57 00:13:43.262 lat (usec): min=327, max=10244, avg=4303.00, stdev=708.31 00:13:43.262 clat percentiles (usec): 00:13:43.262 | 1.00th=[ 2671], 5.00th=[ 3195], 10.00th=[ 3359], 20.00th=[ 3818], 00:13:43.262 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:13:43.262 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 5080], 95.00th=[ 5407], 00:13:43.262 | 99.00th=[ 6390], 99.50th=[ 7308], 99.90th=[ 8455], 99.95th=[ 8979], 00:13:43.262 | 99.99th=[10028] 00:13:43.262 bw ( KiB/s): min=56360, max=58904, per=96.74%, avg=57472.00, stdev=1301.84, samples=3 00:13:43.262 iops : min=14090, max=14726, avg=14368.00, stdev=325.46, samples=3 00:13:43.262 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:13:43.262 lat (msec) : 2=0.30%, 4=24.09%, 10=75.55%, 20=0.01% 00:13:43.262 cpu : usr=98.20%, sys=0.40%, ctx=25, majf=0, minf=606 00:13:43.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:43.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:43.262 issued rwts: total=29711,29719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:43.262 00:13:43.262 Run status group 0 (all jobs): 00:13:43.262 READ: bw=58.0MiB/s (60.8MB/s), 58.0MiB/s-58.0MiB/s (60.8MB/s-60.8MB/s), io=116MiB (122MB), run=2001-2001msec 00:13:43.262 WRITE: bw=58.0MiB/s (60.8MB/s), 58.0MiB/s-58.0MiB/s (60.8MB/s-60.8MB/s), io=116MiB (122MB), run=2001-2001msec 00:13:43.262 ----------------------------------------------------- 00:13:43.262 Suppressions used: 00:13:43.262 count bytes template 00:13:43.262 1 32 /usr/src/fio/parse.c 00:13:43.262 1 8 libtcmalloc_minimal.so 00:13:43.262 ----------------------------------------------------- 00:13:43.262 00:13:43.262 20:41:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:43.262 20:41:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:43.262 20:41:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:43.262 20:41:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:43.262 20:41:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:43.262 20:41:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:43.829 20:41:38 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:43.829 20:41:38 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:43.829 20:41:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:43.829 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:43.829 fio-3.35 00:13:43.829 Starting 1 thread 00:13:48.040 00:13:48.040 test: (groupid=0, jobs=1): err= 0: pid=66378: Tue Nov 26 20:41:42 2024 00:13:48.040 read: IOPS=16.7k, BW=65.0MiB/s (68.2MB/s)(130MiB/2001msec) 00:13:48.040 slat (usec): min=4, max=379, avg= 6.28, stdev= 3.02 00:13:48.040 clat (usec): min=338, max=8511, avg=3822.85, stdev=612.24 00:13:48.040 lat (usec): min=344, max=8565, avg=3829.13, stdev=613.05 00:13:48.040 clat percentiles (usec): 00:13:48.040 | 1.00th=[ 2835], 5.00th=[ 3064], 10.00th=[ 3097], 20.00th=[ 3195], 00:13:48.040 | 30.00th=[ 3326], 40.00th=[ 3687], 50.00th=[ 3884], 60.00th=[ 4015], 00:13:48.040 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4752], 00:13:48.040 | 99.00th=[ 5276], 99.50th=[ 5669], 99.90th=[ 7308], 99.95th=[ 7701], 00:13:48.040 | 99.99th=[ 8356] 00:13:48.040 bw ( KiB/s): min=58832, max=77656, per=100.00%, avg=66976.00, stdev=9664.84, samples=3 00:13:48.040 iops : min=14708, max=19414, avg=16744.00, stdev=2416.21, samples=3 00:13:48.040 write: IOPS=16.7k, BW=65.2MiB/s (68.3MB/s)(130MiB/2001msec); 0 zone resets 00:13:48.040 slat (usec): min=4, max=193, avg= 6.57, stdev= 2.10 00:13:48.040 clat (usec): min=229, max=8417, avg=3825.32, stdev=613.29 00:13:48.040 lat (usec): min=235, max=8429, avg=3831.89, stdev=614.14 00:13:48.040 clat percentiles (usec): 00:13:48.040 | 1.00th=[ 2802], 5.00th=[ 3064], 10.00th=[ 3097], 20.00th=[ 3195], 00:13:48.040 | 30.00th=[ 3326], 40.00th=[ 3687], 50.00th=[ 3884], 60.00th=[ 4015], 00:13:48.040 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4752], 00:13:48.040 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 7439], 99.95th=[ 7701], 
00:13:48.040 | 99.99th=[ 8225] 00:13:48.040 bw ( KiB/s): min=58456, max=77504, per=100.00%, avg=66944.00, stdev=9691.57, samples=3 00:13:48.040 iops : min=14614, max=19376, avg=16736.00, stdev=2422.89, samples=3 00:13:48.040 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:48.040 lat (msec) : 2=0.22%, 4=59.24%, 10=40.50% 00:13:48.040 cpu : usr=98.45%, sys=0.30%, ctx=27, majf=0, minf=604 00:13:48.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:48.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.040 issued rwts: total=33317,33384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.040 00:13:48.040 Run status group 0 (all jobs): 00:13:48.040 READ: bw=65.0MiB/s (68.2MB/s), 65.0MiB/s-65.0MiB/s (68.2MB/s-68.2MB/s), io=130MiB (136MB), run=2001-2001msec 00:13:48.040 WRITE: bw=65.2MiB/s (68.3MB/s), 65.2MiB/s-65.2MiB/s (68.3MB/s-68.3MB/s), io=130MiB (137MB), run=2001-2001msec 00:13:48.299 ----------------------------------------------------- 00:13:48.299 Suppressions used: 00:13:48.299 count bytes template 00:13:48.299 1 32 /usr/src/fio/parse.c 00:13:48.299 1 8 libtcmalloc_minimal.so 00:13:48.299 ----------------------------------------------------- 00:13:48.299 00:13:48.557 ************************************ 00:13:48.557 END TEST nvme_fio 00:13:48.557 ************************************ 00:13:48.557 20:41:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:48.557 20:41:43 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:48.557 00:13:48.557 real 0m20.163s 00:13:48.557 user 0m14.870s 00:13:48.557 sys 0m5.935s 00:13:48.557 20:41:43 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.557 20:41:43 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:48.557 ************************************ 00:13:48.557 END TEST nvme 00:13:48.557 ************************************ 00:13:48.557 00:13:48.557 real 1m38.266s 00:13:48.557 user 3m48.303s 00:13:48.557 sys 0m26.900s 00:13:48.557 20:41:43 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.557 20:41:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:48.557 20:41:43 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:48.557 20:41:43 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:48.557 20:41:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:48.557 20:41:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.557 20:41:43 -- common/autotest_common.sh@10 -- # set +x 00:13:48.557 ************************************ 00:13:48.557 START TEST nvme_scc 00:13:48.557 ************************************ 00:13:48.557 20:41:43 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:48.557 * Looking for test storage... 
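The fio_plugin wrapper traced repeatedly in the runs above (autotest_common.sh@1341-1356) exists because fio dlopen()s the SPDK ioengine, so when the plugin was built with ASan the matching libasan must be preloaded before fio starts or the sanitizer runtime aborts. A minimal sketch of that pattern, using the paths visible in the log:

    # Find the libasan the SPDK fio plugin links against and preload it
    # (together with the plugin itself) before launching fio, as the
    # wrapper above does.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

The dotted traddr (0000.00.11.0 rather than 0000:00:11.0) is deliberate: fio reserves ':' as a separator inside --filename, so the SPDK plugin accepts '.' in its place.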
00:13:48.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:48.557 20:41:43 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:48.557 20:41:43 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:48.557 20:41:43 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:48.816 20:41:43 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:48.816 20:41:43 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.816 20:41:43 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:48.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.816 --rc genhtml_branch_coverage=1 00:13:48.816 --rc genhtml_function_coverage=1 00:13:48.816 --rc genhtml_legend=1 00:13:48.816 --rc geninfo_all_blocks=1 00:13:48.816 --rc geninfo_unexecuted_blocks=1 00:13:48.816 00:13:48.816 ' 00:13:48.816 20:41:43 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:48.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.816 --rc genhtml_branch_coverage=1 00:13:48.816 --rc genhtml_function_coverage=1 00:13:48.816 --rc genhtml_legend=1 00:13:48.816 --rc geninfo_all_blocks=1 00:13:48.816 --rc geninfo_unexecuted_blocks=1 00:13:48.816 00:13:48.816 ' 00:13:48.816 20:41:43 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:48.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.816 --rc genhtml_branch_coverage=1 00:13:48.816 --rc genhtml_function_coverage=1 00:13:48.816 --rc genhtml_legend=1 00:13:48.816 --rc geninfo_all_blocks=1 00:13:48.816 --rc geninfo_unexecuted_blocks=1 00:13:48.816 00:13:48.816 ' 00:13:48.816 20:41:43 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:48.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.816 --rc genhtml_branch_coverage=1 00:13:48.816 --rc genhtml_function_coverage=1 00:13:48.816 --rc genhtml_legend=1 00:13:48.816 --rc geninfo_all_blocks=1 00:13:48.816 --rc geninfo_unexecuted_blocks=1 00:13:48.816 00:13:48.816 ' 00:13:48.816 20:41:43 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:48.816 20:41:43 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:48.816 20:41:43 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:48.816 20:41:43 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:48.816 20:41:43 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.816 20:41:43 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.816 20:41:43 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.817 20:41:43 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.817 20:41:43 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.817 20:41:43 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:48.817 20:41:43 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
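The lcov version gate traced above (scripts/common.sh lt/cmp_versions) splits dotted versions on '.', '-' and ':' and compares them numerically field by field. A condensed, self-contained sketch of just the less-than case, assuming purely numeric version components:

    lt() {   # usage: lt 1.15 2  -> exits 0 when $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing components default to 0, so 1.15 compares as 1.15.0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal, hence not strictly less-than
    }
    lt 1.15 2 && echo "lcov 1.15 is older than 2.x"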
00:13:48.817 20:41:43 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:48.817 20:41:43 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:48.817 20:41:43 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:48.817 20:41:43 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:48.817 20:41:43 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:48.817 20:41:43 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:48.817 20:41:43 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:48.817 20:41:43 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:48.817 20:41:43 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:48.817 20:41:43 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.817 20:41:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:48.817 20:41:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:48.817 20:41:43 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:48.817 20:41:43 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:49.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:49.393 Waiting for block devices as requested 00:13:49.651 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:49.651 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:49.651 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:49.908 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:55.181 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:55.181 20:41:49 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:55.181 20:41:49 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:55.181 20:41:49 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:55.181 20:41:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:55.181 20:41:49 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:55.181 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
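Everything that follows is functions.sh walking the text output of nvme id-ctrl line by line with IFS=: and eval'ing each field into a per-controller associative array (nvme0[vid], nvme0[ssvid], ...). A compact standalone equivalent of that loop, assuming nvme-cli's default "field : value" output and a hypothetical /dev/nvme0:

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=$(tr -d '[:space:]' <<< "$reg")        # field name, e.g. "mdts"
        val="${val#"${val%%[![:space:]]*}"}"       # strip leading whitespace
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    printf 'model=%s mdts=%s\n' "${ctrl[mn]}" "${ctrl[mdts]}"

Because read assigns the rest of the line to the last variable, multi-colon values such as the subnqn survive the split intact, which is why the same pattern works for every register dumped below.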
00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.182 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
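Fields like mdts=7, captured a few lines above, only become meaningful when combined with other controller properties: MDTS is a power of two expressed in units of the minimum memory page size from the CAP register. A worked example, assuming the common CAP.MPSMIN of 0 (4 KiB pages):

    mdts=7        # from nvme0[mdts] above
    mpsmin=4096   # assumed: CAP.MPSMIN == 0 -> 4 KiB minimum page size
    echo "max data transfer: $(( (1 << mdts) * mpsmin / 1024 )) KiB"   # 512 KiB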
00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.183 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:55.184 20:41:49 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.184 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:55.185 20:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:55.185 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:55.186 
20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.186 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:55.187 20:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.187 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:55.188 20:41:49 nvme_scc 
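The ng0n1 listing above ends with the eight LBA formats (lbaf0..lbaf7); flbas selects which one is active, and the "(in use)" marker on lbaf4 matches flbas=0x4. A minimal sketch, assuming only the two values captured in the trace, of how that decodes to a 4096-byte block size:
#!/usr/bin/env bash
# Sketch (not part of the test run): decode the active LBA format from the
# values the trace captured for ng0n1. flbas bits 3:0 select the format;
# lbads in the selected lbafN entry is log2 of the data size.
declare -A ng0n1=(
  [flbas]=0x4
  [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
)
fmt=$(( ${ng0n1[flbas]} & 0xf ))            # 0x4 -> LBA format 4
entry=${ng0n1[lbaf$fmt]}
lbads=${entry##*lbads:} lbads=${lbads%% *}  # pull the lbads field -> 12
echo "active format: $fmt, block size: $((1 << lbads)) bytes"  # 4096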
-- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:55.188 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:55.189 20:41:49 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.189 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:55.190 20:41:50 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:55.190 20:41:50 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:55.190 20:41:50 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:55.190 20:41:50 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:55.190 20:41:50 
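At this point the scan has finished nvme0 and files it into the harness's lookup tables before moving on to nvme1. A simplified sketch of that bookkeeping, with the values copied from the trace above (the real script derives them at runtime from sysfs):
#!/usr/bin/env bash
# Sketch of the registration step visible in the trace: controller, its
# namespace map, and its PCI address are all recorded under the device name.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
ctrl_dev=nvme0
ctrls["$ctrl_dev"]=nvme0        # nvme0 is itself an array of id-ctrl fields
nvmes["$ctrl_dev"]=nvme0_ns     # name of the nsid -> namespace-array map
bdfs["$ctrl_dev"]=0000:00:11.0  # PCI bus:device.function backing the controller
ordered_ctrls[${ctrl_dev/nvme/}]=nvme0  # index 0 keeps controllers in numeric order
echo "${!ctrls[@]} -> ${bdfs[nvme0]}"   # nvme0 -> 0000:00:11.0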
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.190 
20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:55.190 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:55.191 
20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.191 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
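The sqes and cqes values just captured are packed nibbles: bits 3:0 give the required queue entry size and bits 7:4 the maximum, each as a power of two. A small sketch of the decode, using the values from the trace above:
#!/usr/bin/env bash
# Sketch: sqes=0x66 -> 64-byte submission queue entries (min and max),
# cqes=0x44 -> 16-byte completion queue entries.
decode() {
  local val=$(( $1 ))
  printf '%s: min %d bytes, max %d bytes\n' \
    "$2" $(( 1 << (val & 0xf) )) $(( 1 << ((val >> 4) & 0xf) ))
}
decode 0x66 sqes   # sqes: min 64 bytes, max 64 bytes
decode 0x44 cqes   # cqes: min 16 bytes, max 16 bytes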
00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.192 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.193 20:41:50 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:55.193 20:41:50 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.193 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
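A quick decode of the id-ns values just captured for ng1n1. The lbads:12 figure comes from the lbaf7 entry that flbas=0x7 selects (that entry, marked "(in use)", is printed a little further down in this dump); the arithmetic is plain bash:

    nsze=0x17a17a; flbas=0x7; lbads=12
    echo "format index: $((flbas & 0xf))"    # -> 7, i.e. lbaf7 is in use
    echo "capacity: $((nsze)) blocks = $((nsze * (1 << lbads))) bytes"
    # -> 1548666 blocks = 6343335936 bytes (~6.3 GB of 4096-byte data blocks,
    #    each carrying 64 bytes of metadata per the lbaf7 "ms:64" field)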
00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:55.194 20:41:50 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.194 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 
20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.195 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.196 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:55.197 
20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.197 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:55.198 20:41:50 
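At this point the namespace walk for nvme1 is complete: the loop visited both the generic character node ng1n1 and the block node nvme1n1, ran id-ns against each, and keyed both into the nvme1_ns array by namespace index. A sketch of that walk, mirroring the extglob pattern visible in the trace (the guard and key extraction are assumptions about the helper's details):

    shopt -s extglob                                   # needed for the @(...) glob below
    ctrl=/sys/class/nvme/nvme1
    declare -A nvme1_ns=()
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # matches ng1*, nvme1n*
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                               # ng1n1 first, then nvme1n1
        nvme1_ns[${ns_dev##*n}]=$ns_dev                # both land on key "1"
    done
    # Glob expansion sorts ng1n1 before nvme1n1, so the block device is the
    # entry that sticks for namespace 1 — matching the two assignments above.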
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:55.198 20:41:50 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:55.198 20:41:50 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:55.198 20:41:50 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:55.198 20:41:50 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:13:55.198 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
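Just before this second parse started, nvme1 was registered in four maps (ctrls, nvmes, bdfs, ordered_ctrls) and the outer loop advanced to nvme2 at 0000:00:12.0, which passed the pci_can_use gate; the bare operands in the "[[ =~ 0000:00:12.0 ]]" test above suggest empty allow/block lists, so every device passes. Roughly, with map names taken from the trace and the PCI-address lookup via the sysfs device symlink being an assumption about how the script resolves it:

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumed lookup)
        ctrl_dev=${ctrl##*/}                              # e.g. nvme2
        # ... id-ctrl and per-namespace parsing happens here, as traced above ...
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # name of its namespace array
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexed by controller number
    done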
00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:13:55.458 20:41:50 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get nvme2: id-ctrl read/eval loop (repetitive xtrace condensed to the fields assigned)
00:13:55.459 20:41:50 nvme_scc --     nvme2: ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:13:55.459 20:41:50 nvme_scc --     nvme2: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3
00:13:55.459 20:41:50 nvme_scc --     nvme2: frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:13:55.459 20:41:50 nvme_scc --     nvme2: mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0
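Two of the fields just read, wctemp=343 and cctemp=373, are the warning and critical composite temperature thresholds, which NVMe reports in whole kelvins. A one-line conversion (hypothetical helper name, plain shell arithmetic) gives the Celsius values:

    # NVMe WCTEMP/CCTEMP are whole kelvins; subtract 273 for degrees C.
    k_to_c() { echo $(( $1 - 273 )); }
    k_to_c 343   # -> 70  (warning threshold, degC)
    k_to_c 373   # -> 100 (critical threshold, degC)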
00:13:55.459 20:41:50 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl fields (continued)
00:13:55.459 20:41:50 nvme_scc --     nvme2: dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
00:13:55.459 20:41:50 nvme_scc --     nvme2: nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:13:55.459 20:41:50 nvme_scc --     nvme2: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0
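The sqes=0x66 and cqes=0x44 values just captured are packed log2 entry sizes: bits 3:0 give the required submission/completion queue entry size and bits 7:4 the maximum, each as a power of two. A small decoder sketch (hypothetical function, not part of functions.sh):

    # SQES/CQES pack log2(entry size) into two nibbles.
    decode_qes() {
        local v=$(( $1 ))
        echo "required=$(( 1 << (v & 0xf) ))B max=$(( 1 << ((v >> 4) & 0xf) ))B"
    }
    decode_qes 0x66   # -> required=64B max=64B  (submission queue entries)
    decode_qes 0x44   # -> required=16B max=16B  (completion queue entries)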
00:13:55.459 20:41:50 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl fields (continued)
00:13:55.459 20:41:50 nvme_scc --     nvme2: nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:13:55.460 20:41:50 nvme_scc --     nvme2: subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:13:55.460 20:41:50 nvme_scc --     nvme2: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:13:55.460 20:41:50 nvme_scc --     nvme2: rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:13:55.460 20:41:50 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:55.460 20:41:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
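The @16-@23 lines repeating throughout this trace come from nvme/functions.sh's nvme_get helper: it runs nvme-cli, splits each "field : value" output line on ':', and evals the pair into a global associative array named after the device (nvme2 above, ng2n1/ng2n2/ng2n3 below). A minimal standalone sketch of that pattern, simplified from the real helper whose ref/shift/local -gA plumbing is visible in the trace:

    # Sketch only; assumes nvme-cli id-ctrl/id-ns print "field : value" lines.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declares global array nvme2
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # strip padding around field name
            val=${val# }                     # drop the space after the colon
            [[ -n $val ]] || continue        # skip lines with no value
            eval "${ref}[${reg}]=\"\$val\""  # e.g. nvme2[ctratt]="0x8000"
        done < <("$@")                       # e.g. nvme id-ctrl /dev/nvme2
    }

The trace shows it invoked as, for example, nvme_get ng2n1 id-ns /dev/ng2n1, with line @16 expanding the nvme-cli binary to /usr/local/src/nvme-cli/nvme.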
00:13:55.460 20:41:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:13:55.460 20:41:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:13:55.460 20:41:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:13:55.460 20:41:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:13:55.460 20:41:50 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns read/eval loop (condensed)
00:13:55.460 20:41:50 nvme_scc --     ng2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:13:55.460 20:41:50 nvme_scc --     ng2n1: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:13:55.460 20:41:50 nvme_scc --     ng2n1: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:13:55.460 20:41:50 nvme_scc --     ng2n1: anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:55.460 20:41:50 nvme_scc --     ng2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:55.460 20:41:50 nvme_scc --     ng2n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
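In the lbaf entries above, lbads is log2 of the logical block size, ms is the per-block metadata size in bytes, and flbas=0x4 selects lbaf4, the format marked "(in use)": 4 KiB blocks with no separate metadata. A quick sketch (hypothetical helper) for reading these numbers:

    # LBADS is log2(logical block size); FLBAS bits 3:0 pick the active format.
    lbads_to_bytes() { echo $(( 1 << $1 )); }
    lbads_to_bytes 9        # -> 512   (lbaf0-lbaf3)
    lbads_to_bytes 12       # -> 4096  (lbaf4-lbaf7)
    echo $(( 0x4 & 0xf ))   # -> 4, i.e. lbaf4 is in use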
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns read/eval loop (condensed; ng2n2 reports the same geometry as ng2n1)
00:13:55.461 20:41:50 nvme_scc --     ng2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:13:55.461 20:41:50 nvme_scc --     ng2n2: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:13:55.461 20:41:50 nvme_scc --     ng2n2: nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:13:55.461 20:41:50 nvme_scc --     ng2n2: anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:55.461 20:41:50 nvme_scc --     ng2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:55.461 20:41:50 nvme_scc --     ng2n2: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
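Each namespace ends with @58 keying it into _ctrl_ns by index, and @54 advances the loop. The extglob pattern matches both the ngXnY character-device nodes and the nvmeXnY block nodes under the controller's sysfs directory. A standalone sketch of that enumeration (assumes extglob is enabled, as functions.sh does; the real loop guards each match with the [[ -e ]] test seen at @55):

    # Enumerate namespaces of nvme2 the way the traced loop does.
    shopt -s extglob nullglob
    declare -A _ctrl_ns
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}              # e.g. ng2n1 or nvme2n1
        _ctrl_ns[${ns##*n}]=$ns_dev   # keyed by namespace index: 1, 2, 3...
    done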
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:13:55.461 20:41:50 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns read/eval loop (condensed; ng2n3 matches ng2n1/ng2n2 so far)
00:13:55.462 20:41:50 nvme_scc --     ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:13:55.462 20:41:50 nvme_scc --     ng2n3: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0
00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.462 20:41:50 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.462 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:55.463 20:41:50 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:55.463 20:41:50 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.463 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:55.464 
20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:55.464 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:55.724 20:41:50 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.724 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.725 20:41:50 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.725 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:55.726 20:41:50 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:55.726 20:41:50 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:55.726 20:41:50 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:55.726 20:41:50 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:55.726 20:41:50 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:55.726 20:41:50 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.726 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:55.727 20:41:50 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 
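The wctemp and cctemp fields parsed just below are thermal thresholds, which the NVMe specification reports in Kelvin. A quick standalone conversion of the values this controller reports (a sketch for reference, not part of the test tree):

```bash
# WCTEMP/CCTEMP are Kelvin per the NVMe spec; convert to integer
# Celsius (K - 273). Values taken from the id-ctrl output parsed here.
declare -A nvme3=([wctemp]=343 [cctemp]=373)
echo "warning  threshold: $(( ${nvme3[wctemp]} - 273 )) C"   # 70 C
echo "critical threshold: $(( ${nvme3[cctemp]} - 273 )) C"   # 100 C
```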
20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:55.727 20:41:50 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.727 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 
20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:55.728 
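The sqes and cqes bytes captured at this point pack two powers of two: the low nibble is the required submission/completion queue entry size and the high nibble the maximum, each as log2 of the byte count. A standalone sketch decoding the values seen here (decode_qes is a hypothetical helper, not in the SPDK scripts):

```bash
# Decode an NVMe SQES/CQES byte: low nibble = required entry size,
# high nibble = maximum entry size, both stored as log2(bytes).
decode_qes() {
    local val=$1
    printf 'required %d bytes, maximum %d bytes\n' \
        $(( 1 << (val & 0xf) )) $(( 1 << (val >> 4) ))
}
decode_qes 0x66   # SQ entries: required 64 bytes, maximum 64 bytes
decode_qes 0x44   # CQ entries: required 16 bytes, maximum 16 bytes
```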
20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:55.728 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:55.729 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:55.730 20:41:50 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:55.730 20:41:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
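At this point the trace is inside get_ctrls_with_feature scc: for each controller it dereferences the previously built associative array through a nameref, reads the oncs register, and tests bit 8, which advertises the NVMe Copy (Simple Copy) command. A minimal sketch of that check, assuming the per-controller arrays populated earlier in the log:

```bash
# Sketch of the ctrl_has_scc test traced here: ONCS bit 8 advertises
# the NVMe Copy command (Simple Copy). Array contents as reported by
# this controller's id-ctrl output.
declare -A nvme1=([oncs]=0x15d)

ctrl_has_scc() {
    local -n _ctrl=$1            # nameref to the nvme0/nvme1/... array
    local oncs=${_ctrl[oncs]}
    (( oncs & 1 << 8 ))          # bit 8 set -> controller supports SCC
}

ctrl_has_scc nvme1 && echo nvme1   # 0x15d has bit 8 set, so this prints
```

All four controllers report oncs=0x15d, so each passes the check; the harness then settles on nvme1 (bdf 0000:00:10.0) for the SCC test, as the trace below shows.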
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:13:55.730 20:41:50 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:13:55.731 20:41:50 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:13:55.731 20:41:50 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:13:55.731 20:41:50 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:13:55.731 20:41:50 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:13:55.731 20:41:50 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:13:55.731 20:41:50 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:13:55.731 20:41:50 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:13:55.731 20:41:50 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:13:55.731 20:41:50 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:13:55.731 20:41:50 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:13:55.731 20:41:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:13:55.731 20:41:50 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:13:55.731 20:41:50 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:13:56.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:56.908 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:13:56.908 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:13:57.165 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:13:57.165 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:13:57.165 20:41:52 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:13:57.165 20:41:52 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:13:57.165 20:41:52 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:57.165 20:41:52 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:13:57.165 ************************************
00:13:57.165 START TEST nvme_simple_copy
00:13:57.165 ************************************
00:13:57.165 20:41:52 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:13:57.422 Initializing NVMe Controllers
00:13:57.422 Attaching to 0000:00:10.0
00:13:57.422 Controller supports SCC. Attached to 0000:00:10.0
00:13:57.422 Namespace ID: 1 size: 6GB
00:13:57.422 Initialization complete.
00:13:57.422
00:13:57.422 Controller QEMU NVMe Ctrl (12340 )
00:13:57.422 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:13:57.422 Namespace Block Size:4096
00:13:57.422 Writing LBAs 0 to 63 with Random Data
00:13:57.422 Copied LBAs from 0 - 63 to the Destination LBA 256
00:13:57.422 LBAs matching Written Data: 64
00:13:57.422
00:13:57.422 real 0m0.336s
00:13:57.422 user 0m0.133s
00:13:57.422 sys 0m0.102s
00:13:57.422 20:41:52 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:57.422 20:41:52 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:13:57.422 ************************************
00:13:57.422 END TEST nvme_simple_copy
00:13:57.422 ************************************
00:13:57.423
00:13:57.423 real 0m8.962s
00:13:57.423 user 0m1.788s
00:13:57.423 sys 0m2.195s
00:13:57.423 20:41:52 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:57.423 20:41:52 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:13:57.423 ************************************
00:13:57.423 END TEST nvme_scc
00:13:57.423 ************************************
00:13:57.423 20:41:52 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:13:57.423 20:41:52 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:13:57.423 20:41:52 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:13:57.423 20:41:52 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:13:57.423 20:41:52 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:13:57.423 20:41:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:57.423 20:41:52 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:57.423 20:41:52 -- common/autotest_common.sh@10 -- # set +x
00:13:57.680 ************************************
00:13:57.680 START TEST nvme_fdp
00:13:57.680 ************************************
00:13:57.680 20:41:52 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:13:57.680 * Looking for test storage...
00:13:57.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:13:57.680 20:41:52 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:57.680 20:41:52 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:13:57.680 20:41:52 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:57.680 20:41:52 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.680 20:41:52 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:57.680 20:41:52 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.680 20:41:52 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:57.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.681 --rc genhtml_branch_coverage=1 00:13:57.681 --rc genhtml_function_coverage=1 00:13:57.681 --rc genhtml_legend=1 00:13:57.681 --rc geninfo_all_blocks=1 00:13:57.681 --rc geninfo_unexecuted_blocks=1 00:13:57.681 00:13:57.681 ' 00:13:57.681 20:41:52 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:57.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.681 --rc genhtml_branch_coverage=1 00:13:57.681 --rc genhtml_function_coverage=1 00:13:57.681 --rc genhtml_legend=1 00:13:57.681 --rc geninfo_all_blocks=1 00:13:57.681 --rc geninfo_unexecuted_blocks=1 00:13:57.681 00:13:57.681 ' 00:13:57.681 20:41:52 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:57.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.681 --rc genhtml_branch_coverage=1 00:13:57.681 --rc genhtml_function_coverage=1 00:13:57.681 --rc genhtml_legend=1 00:13:57.681 --rc geninfo_all_blocks=1 00:13:57.681 --rc geninfo_unexecuted_blocks=1 00:13:57.681 00:13:57.681 ' 00:13:57.681 20:41:52 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:57.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.681 --rc genhtml_branch_coverage=1 00:13:57.681 --rc genhtml_function_coverage=1 00:13:57.681 --rc genhtml_legend=1 00:13:57.681 --rc geninfo_all_blocks=1 00:13:57.681 --rc geninfo_unexecuted_blocks=1 00:13:57.681 00:13:57.681 ' 00:13:57.681 20:41:52 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:57.681 20:41:52 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:57.681 20:41:52 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:57.681 20:41:52 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:57.681 20:41:52 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:57.681 20:41:52 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.681 20:41:52 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.681 20:41:52 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
00:13:57.681 20:41:52 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:13:57.681 20:41:52 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:13:57.681 20:41:52 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh (shopt -s extglob; /bin/wpdk_common.sh absent)
00:13:57.681 20:41:52 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:57.681 20:41:52 nvme_fdp -- paths/export.sh@2-6 -- # prepend /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin, then export and echo PATH; the resulting PATH carries several repeated copies of those prefixes ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:57.681 20:41:52 nvme_fdp -- nvme/functions.sh@10-14 -- # ctrls=(), nvmes=(), bdfs=() declared as associative arrays; ordered_ctrls=() as an indexed array; nvme_name=
00:13:57.681 20:41:52 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:13:57.681 20:41:52 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:13:57.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:58.197 Waiting for block devices as requested
00:13:58.197 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:13:58.197 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:13:58.454 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:13:58.454 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:14:03.793 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
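An illustrative way to confirm what setup.sh reset reports above is to read the driver symlinks back out of sysfs; the four BDFs are the QEMU NVMe functions from the bind lines in the log:

    # Illustrative readback of the binding state reported above: each BDF's
    # driver symlink in sysfs should now resolve to the kernel nvme driver.
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        printf '%s -> %s\n' "$bdf" \
            "$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
    done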
00:14:03.793 20:41:58 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:14:03.793 20:41:58 nvme_fdp -- nvme/functions.sh@45-51 -- # for ctrl in /sys/class/nvme/nvme*: nvme0 exists, pci=0000:00:11.0, pci_can_use 0000:00:11.0 returns 0 (allow/block lists empty), ctrl_dev=nvme0
00:14:03.793 20:41:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0: output of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 parsed register by register into the global associative array nvme0
00:14:03.793 id-ctrl values captured for nvme0 (condensed from the eval trace, padding trimmed):
00:14:03.793   vid=0x1b36 ssvid=0x1af4 sn='12341' mn='QEMU NVMe Ctrl' fr='8.0.0'
00:14:03.793   rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0
00:14:03.793   oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:14:03.793   crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:14:03.794   oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:14:03.794   wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0
00:14:03.794   edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0
00:14:03.794   hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0
00:14:03.795   pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d
00:14:03.795   fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
00:14:03.795   mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341
00:14:03.796   ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:14:03.796   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
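The eval trace condensed above is nvme_get's core loop: read "reg : val" pairs from nvme-cli output and stash them in a bash associative array keyed by register name. functions.sh routes this through a shift plus eval so the target array name can vary; a minimal standalone analogue, assuming the same nvme-cli binary path as in the log:

    # Minimal standalone analogue of the nvme_get pattern (illustrative;
    # the real helper evals into a caller-named global array instead).
    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue       # skip blanks and headers
        reg=${reg//[[:space:]]/}                   # 'vid       ' -> 'vid'
        val=${val#"${val%%[![:space:]]*}"}         # drop leading spaces only
        ctrl[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "mn=${ctrl[mn]} mdts=${ctrl[mdts]} oncs=${ctrl[oncs]}"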
00:14:03.796 20:41:58 nvme_fdp -- nvme/functions.sh@53-57 -- # local -n _ctrl_ns=nvme0_ns; for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*: /sys/class/nvme/nvme0/ng0n1 exists, ns_dev=ng0n1, nvme_get ng0n1 id-ns /dev/ng0n1
00:14:03.796 id-ns values captured for ng0n1 (condensed from the eval trace):
00:14:03.796   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:14:03.796   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:14:03.796   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:14:03.797   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:14:03.797   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:14:03.797   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:14:03.797   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:14:03.797   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
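Sanity check on the ng0n1 values above: flbas=0x4 selects lbaf4 (lbads:12, i.e. 2^12 = 4096-byte blocks with no metadata), and nsze=0x140000 blocks therefore works out to 5 GiB:

    # flbas=0x4 -> lbaf4 (ms:0 lbads:12), so 2^12 = 4096-byte blocks;
    # nsze=0x140000 blocks gives the namespace size.
    printf '%d blocks x %d B = %d MiB\n' $((0x140000)) $((1 << 12)) \
        $(( (0x140000 * (1 << 12)) >> 20 ))
    # -> 1310720 blocks x 4096 B = 5120 MiB (5 GiB)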
00:14:03.797 20:41:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:14:03.797 20:41:58 nvme_fdp -- nvme/functions.sh@54-57 -- # next ns: /sys/class/nvme/nvme0/nvme0n1 exists, ns_dev=nvme0n1, nvme_get nvme0n1 id-ns /dev/nvme0n1
00:14:03.798 id-ns values captured for nvme0n1 (condensed from the eval trace, identical to ng0n1 through this point):
00:14:03.798   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:14:03.798   mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:14:03.798   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:14:03.798   npwg=0 npwa=0 npdg=0 npda=0
reg val 00:14:03.798 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.798 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:14:03.798 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:14:03.798 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:03.799 20:41:58 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:14:03.799 20:41:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:14:03.799 20:41:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:14:03.799 20:41:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:03.799 20:41:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.799 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:14:03.800 20:41:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
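The controller-capability words being stored here are bitfields, so the hex is opaque in the log but cheap to decode. For example, the oacs value 0x12a captured just above breaks down as follows (bit positions per the NVMe base spec; this decode is illustrative, not part of the test):

    oacs=0x12a   # nvme1[oacs] from the id-ctrl dump above
    (( oacs & 1 << 1 )) && echo 'Format NVM'              # bit 1
    (( oacs & 1 << 3 )) && echo 'Namespace Management'    # bit 3
    (( oacs & 1 << 5 )) && echo 'Directive Send/Receive'  # bit 5
    (( oacs & 1 << 8 )) && echo 'Doorbell Buffer Config'  # bit 8
    # 0x12a = 0b100101010 -> exactly bits 1, 3, 5 and 8 set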
00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.800 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
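Likewise, the mdts=7 recorded for nvme1 earlier in this id-ctrl dump is a power-of-two exponent, not a byte count: the maximum data transfer size is 2^MDTS memory pages. Assuming the usual CAP.MPSMIN of 4 KiB (an assumption; the CAP register itself does not appear in this log):

    mdts=7                           # nvme1[mdts] from the id-ctrl dump above
    page=4096                        # assumed CAP.MPSMIN page size (4 KiB)
    echo $(( (1 << mdts) * page ))   # 524288 -> 512 KiB max transfer per command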
00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:14:03.801 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:03.802 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:14:03.803 20:41:58 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
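For the namespaces, flbas is what ties the lbafN strings together: its low nibble selects the format currently in use, and bit 4 says whether metadata travels inline in extended LBAs. ng1n1 just reported flbas=0x7, i.e. lbaf7, while nvme0n1's flbas=0x4 earlier is why its lbaf4 entry carries the "(in use)" marker. A quick decode (field layout per the NVMe spec; illustrative only):

    flbas=0x7
    fmt=$(( flbas & 0xf ))        # bits 3:0 -> LBA format index, here 7
    ext=$(( flbas >> 4 & 1 ))     # bit 4   -> 1 = metadata inline (extended LBA)
    # lbaf7 = 'ms:64 lbads:12' -> 2^12 = 4096-byte data blocks + 64 B metadata
    echo "format=$fmt extended=$ext block_bytes=$(( 1 << 12 ))"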
00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:14:03.803 20:41:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.803 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:03.804 20:41:58 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:03.804 20:41:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:14:03.804 20:41:58 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.804 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.805 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:14:04.069 20:41:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
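Both ng1n1 and nvme1n1 report flbas=0x7 in the dump above, i.e. LBA format 7 (ms:64 lbads:12, flagged "(in use)" in the lbaf descriptors) is active. The low nibble of FLBAS indexes the lbafN descriptor, and lbads is log2 of the data size. A hedged worked example using the values from this dump:

    flbas=0x7
    lbads=12 ms=64                  # from the lbaf7 descriptor
    fmt=$((flbas & 0xf))            # bits 3:0 of FLBAS select the format
    echo "lbaf${fmt}: $((1 << lbads))-byte data blocks + ${ms}B metadata"
    # -> lbaf7: 4096-byte data blocks + 64B metadata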
00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:14:04.069 20:41:58 nvme_fdp -- scripts/common.sh@18 -- # local i 00:14:04.069 20:41:58 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:14:04.069 20:41:58 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:04.069 20:41:58 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.069 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
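The id-ctrl pass for nvme2 above records ver=0x10400. The VER field packs the major, minor, and tertiary version numbers into bits 31:16, 15:8, and 7:0, so this QEMU-emulated controller reports NVMe 1.4.0. A one-liner to decode it:

    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    # -> NVMe 1.4.0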
00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:14:04.070 20:41:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
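The wctemp=343 and cctemp=373 values just above are the warning and critical composite temperature thresholds, which the spec encodes in kelvins; converting makes them readable:

    wctemp=343 cctemp=373
    echo "warning at $((wctemp - 273))C, critical at $((cctemp - 273))C"
    # -> warning at 70C, critical at 100C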
00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.070 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:14:04.071 20:41:58 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:14:04.071 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
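Once a controller's id-ctrl dump completes, the loop at functions.sh@54 (traced below for nvme2, and earlier for nvme1) enumerates that controller's namespace nodes with an extended glob that matches both the ngXnY character devices and the nvmeXnY block devices under sysfs. A standalone sketch, assuming extglob is enabled as the @( ) pattern requires:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2",
        # so this matches ng2n1, nvme2n1, ...
        echo "namespace node: ${ns##*/}"
    done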
00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:14:04.072 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 
20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.073 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:14:04.074 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:14:04.075 20:41:58 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 
20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.075 20:41:58 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.075 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:14:04.076 
20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:14:04.076 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:14:04.077 20:41:58 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.077 20:41:58 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:14:04.077 20:41:58 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:14:04.077 20:41:58 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
[nvme/functions.sh@21-23 xtrace condensed -- the IFS=: / read -r reg val / eval loop fills nvme2n1[]: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' (in use) lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0']
00:14:04.079 20:41:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
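
The trace above is nvme_get walking `nvme id-ns` output one "register : value" line at a time: it declares a global associative array named by the caller (local -gA 'nvme2n1=()'), splits each line on ':' with read, and assigns through eval. A minimal sketch of that idiom, assuming nvme-cli's "name : value" output layout; names not visible in the trace are illustrative only:

    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # global assoc array, e.g. nvme2n1
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # strip the padding around the key
            val=${val# }                     # drop the separator's space
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[$reg]=\"$val\""     # e.g. nvme2n1[nsze]="0x100000"
        done < <("$@")                       # e.g. nvme id-ns /dev/nvme2n1
    }

Once it returns, ${nvme2n1[nsze]} and friends are ordinary global lookups for the rest of the run.
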
00:14:04.079 20:41:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:14:04.079 20:41:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:14:04.079 20:41:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:14:04.079 20:41:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:14:04.079 20:41:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:14:04.079 20:41:59 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:14:04.079 20:41:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:14:04.079 20:41:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
[nvme/functions.sh@21-23 xtrace condensed -- nvme2n2[] receives the same register values as nvme2n1 above, nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0', with lbaf4 in use]
00:14:04.342 20:41:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
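
Namespace enumeration itself (functions.sh@54-58) leans on an extglob pattern that matches both the ng<ctrl><ns> character nodes and the nvme<ctrl>n<ns> block nodes under the controller's sysfs directory, recording each hit by namespace index. Roughly, assuming extglob is enabled and using this run's controller path as the example:

    shopt -s extglob
    declare -A _ctrl_ns=()
    ctrl=/sys/class/nvme/nvme2                           # example path from this run
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                         # the @55 existence check
        ns_dev=${ns##*/}                                 # e.g. nvme2n2
        _ctrl_ns[${ns##*n}]=$ns_dev                      # keyed by namespace index
    done

With ctrl=/sys/class/nvme/nvme2 the pattern expands to ng2* and nvme2n*, which is why nvme2n1, nvme2n2 and nvme2n3 are visited in order.
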
00:14:04.342 20:41:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:14:04.342 20:41:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:14:04.342 20:41:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:14:04.342 20:41:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:14:04.342 20:41:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:14:04.342 20:41:59 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:14:04.342 20:41:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:14:04.342 20:41:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
[nvme/functions.sh@21-23 xtrace condensed -- nvme2n3[] receives the same register values as nvme2n1 and nvme2n2 above, nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0', with lbaf4 in use]
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
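
After the last namespace, functions.sh@60-63 files the controller away in three maps plus an index-ordered list: ctrls (device name), nvmes (the name of that controller's namespace map, here nvme2_ns), and bdfs (PCI address). A sketch of how a consumer might walk those maps later through a bash nameref; the walking loop is an assumption, and nvme2_ns is seeded below the way the trace populated _ctrl_ns:

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    declare -A nvme2_ns=([1]=nvme2n1 [2]=nvme2n2 [3]=nvme2n3)
    ctrls[nvme2]=nvme2; nvmes[nvme2]=nvme2_ns; bdfs[nvme2]=0000:00:12.0
    ordered_ctrls[2]=nvme2

    for ctrl in "${ordered_ctrls[@]}"; do
        printf '%s @ %s\n' "$ctrl" "${bdfs[$ctrl]}"      # nvme2 @ 0000:00:12.0
        declare -n ns_map=${nvmes[$ctrl]}                # follow the stored map name
        for idx in "${!ns_map[@]}"; do
            printf '  ns%s -> %s\n' "$idx" "${ns_map[$idx]}"
        done
        unset -n ns_map                                  # re-point cleanly next pass
    done

Storing the map's *name* in nvmes and dereferencing it with declare -n is what lets one flat associative array index an arbitrary number of per-controller arrays.
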
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:14:04.344 20:41:59 nvme_fdp -- scripts/common.sh@18 -- # local i
00:14:04.344 20:41:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:14:04.344 20:41:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:14:04.344 20:41:59 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
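
pci_can_use (scripts/common.sh) is the gate that decides whether the test may claim the controller at 0000:00:13.0. The trace only exposes an =~ test whose left side expanded empty, an [[ -z '' ]] test, and return 0, i.e. neither filter list is set in this run. A rough reconstruction under the assumption that block/allow lists along the lines of PCI_BLOCKED / PCI_ALLOWED are what get consulted:

    pci_can_use_sketch() {
        local pci=$1                          # the trace's local i hints at a list loop
        # common.sh@21: a block-listed BDF is rejected outright
        [[ ${PCI_BLOCKED-} =~ $pci ]] && return 1
        # common.sh@25-27: an empty allow-list accepts everything (this run's case)
        [[ -z ${PCI_ALLOWED-} ]] && return 0
        # otherwise the BDF must appear in the allow-list
        [[ " $PCI_ALLOWED " =~ " $pci " ]]
    }
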
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:14:04.344 20:41:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
[nvme/functions.sh@21-23 xtrace condensed -- id-ctrl registers parsed into nvme3[] so far: vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3]
00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:14:04.345
20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:14:04.345 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.346 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
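The functions.sh@21-23 triples that dominate this trace are single iterations of one loop: nvme id-ctrl output is split on ':' into a register name and a value, and each non-empty value is stored in a per-controller bash associative array through eval. A minimal sketch of that pattern, assuming nvme-cli's "name : value" id-ctrl format (the array and device names mirror the trace; the whitespace cleanup is illustrative, not the exact functions.sh source):

  declare -A nvme3
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}            # strip the padding around the register name
      [[ -n $val ]] || continue           # same guard as the [[ -n ... ]] tests above
      eval "nvme3[$reg]=\"${val# }\""     # e.g. nvme3[vid]="0x1b36", nvme3[mdts]="7"
  done < <(nvme id-ctrl /dev/nvme3)
  echo "${nvme3[vid]}"                    # 0x1b36, the QEMU vendor ID seen above

The register dump resumes below with the ps0 power-state fields and ends by registering nvme3 in the global ctrls/nvmes/bdfs maps.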
00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:14:04.347 20:41:59 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:14:04.347 20:41:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:14:04.348 20:41:59 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:14:04.348 20:41:59 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:14:04.348 20:41:59 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:14:04.348 20:41:59 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:04.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:05.847 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.847 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.847 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.847 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.847 20:42:00 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:05.847 20:42:00 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:05.847 20:42:00 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.847 20:42:00 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:05.847 ************************************ 00:14:05.847 START TEST nvme_flexible_data_placement 00:14:05.847 ************************************ 00:14:05.847 20:42:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:06.105 Initializing NVMe Controllers 00:14:06.105 Attaching to 0000:00:13.0 00:14:06.105 Controller supports FDP Attached to 0000:00:13.0 00:14:06.105 Namespace ID: 1 Endurance Group ID: 1 00:14:06.105 Initialization complete. 
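The controller selection above reduces to a single bit test: ctrl_has_fdp reads each controller's CTRATT identify field and checks bit 19, the Flexible Data Placement capability. Only nvme3's 0x88010 has it set; the other three controllers report 0x8000, so nvme3 at 0000:00:13.0 is the one handed to the fdp tool whose output follows. A standalone sketch of the same check (the CTRATT values are taken from the trace; the helper name mirrors functions.sh, the loop is illustrative):

  ctrl_has_fdp() {
      local ctratt=$1
      (( ctratt & 1 << 19 ))              # CTRATT bit 19 = FDP supported
  }

  for pair in nvme0:0x8000 nvme1:0x8000 nvme2:0x8000 nvme3:0x88010; do
      ctrl_has_fdp "${pair#*:}" && echo "${pair%%:*} supports FDP"
  done
  # prints only: nvme3 supports FDP      (0x88010 has 0x80000 set; 0x8000 does not)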
00:14:06.105 00:14:06.105 ================================== 00:14:06.105 == FDP tests for Namespace: #01 == 00:14:06.105 ================================== 00:14:06.105 00:14:06.105 Get Feature: FDP: 00:14:06.105 ================= 00:14:06.105 Enabled: Yes 00:14:06.105 FDP configuration Index: 0 00:14:06.105 00:14:06.105 FDP configurations log page 00:14:06.105 =========================== 00:14:06.105 Number of FDP configurations: 1 00:14:06.105 Version: 0 00:14:06.105 Size: 112 00:14:06.105 FDP Configuration Descriptor: 0 00:14:06.105 Descriptor Size: 96 00:14:06.105 Reclaim Group Identifier format: 2 00:14:06.105 FDP Volatile Write Cache: Not Present 00:14:06.105 FDP Configuration: Valid 00:14:06.105 Vendor Specific Size: 0 00:14:06.105 Number of Reclaim Groups: 2 00:14:06.105 Number of Reclaim Unit Handles: 8 00:14:06.105 Max Placement Identifiers: 128 00:14:06.105 Number of Namespaces Supported: 256 00:14:06.105 Reclaim Unit Nominal Size: 6000000 bytes 00:14:06.105 Estimated Reclaim Unit Time Limit: Not Reported 00:14:06.105 RUH Desc #000: RUH Type: Initially Isolated 00:14:06.105 RUH Desc #001: RUH Type: Initially Isolated 00:14:06.105 RUH Desc #002: RUH Type: Initially Isolated 00:14:06.105 RUH Desc #003: RUH Type: Initially Isolated 00:14:06.105 RUH Desc #004: RUH Type: Initially Isolated 00:14:06.105 RUH Desc #005: RUH Type: Initially Isolated 00:14:06.105 RUH Desc #006: RUH Type: Initially Isolated 00:14:06.105 RUH Desc #007: RUH Type: Initially Isolated 00:14:06.105 00:14:06.105 FDP reclaim unit handle usage log page 00:14:06.105 ====================================== 00:14:06.105 Number of Reclaim Unit Handles: 8 00:14:06.105 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:06.105 RUH Usage Desc #001: RUH Attributes: Unused 00:14:06.105 RUH Usage Desc #002: RUH Attributes: Unused 00:14:06.105 RUH Usage Desc #003: RUH Attributes: Unused 00:14:06.105 RUH Usage Desc #004: RUH Attributes: Unused 00:14:06.105 RUH Usage Desc #005: RUH Attributes: Unused 00:14:06.105 RUH Usage Desc #006: RUH Attributes: Unused 00:14:06.105 RUH Usage Desc #007: RUH Attributes: Unused 00:14:06.105 00:14:06.105 FDP statistics log page 00:14:06.105 ======================= 00:14:06.105 Host bytes with metadata written: 781004800 00:14:06.105 Media bytes with metadata written: 781148160 00:14:06.105 Media bytes erased: 0 00:14:06.105 00:14:06.105 FDP Reclaim unit handle status 00:14:06.105 ============================== 00:14:06.105 Number of RUHS descriptors: 2 00:14:06.105 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000172d 00:14:06.105 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:14:06.105 00:14:06.105 FDP write on placement id: 0 success 00:14:06.105 00:14:06.105 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:14:06.105 00:14:06.105 IO mgmt send: RUH update for Placement ID: #0 Success 00:14:06.105 00:14:06.105 Get Feature: FDP Events for Placement handle: #0 00:14:06.105 ======================== 00:14:06.105 Number of FDP Events: 6 00:14:06.105 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:14:06.105 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:14:06.105 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:14:06.105 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:14:06.105 FDP Event: #4 Type: Media Reallocated Enabled: No 00:14:06.105 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:14:06.105 00:14:06.105 FDP events log page
00:14:06.105 =================== 00:14:06.105 Number of FDP events: 1 00:14:06.105 FDP Event #0: 00:14:06.105 Event Type: RU Not Written to Capacity 00:14:06.105 Placement Identifier: Valid 00:14:06.105 NSID: Valid 00:14:06.105 Location: Valid 00:14:06.105 Placement Identifier: 0 00:14:06.105 Event Timestamp: a 00:14:06.105 Namespace Identifier: 1 00:14:06.105 Reclaim Group Identifier: 0 00:14:06.105 Reclaim Unit Handle Identifier: 0 00:14:06.105 00:14:06.105 FDP test passed 00:14:06.362 00:14:06.362 real 0m0.349s 00:14:06.362 user 0m0.117s 00:14:06.362 sys 0m0.129s 00:14:06.362 ************************************ 00:14:06.362 END TEST nvme_flexible_data_placement 00:14:06.362 ************************************ 00:14:06.362 20:42:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.362 20:42:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:14:06.362 00:14:06.362 real 0m8.732s 00:14:06.362 user 0m1.642s 00:14:06.362 sys 0m1.938s 00:14:06.362 20:42:01 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.362 ************************************ 00:14:06.362 END TEST nvme_fdp 00:14:06.362 ************************************ 00:14:06.362 20:42:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:06.362 20:42:01 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:14:06.362 20:42:01 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:06.362 20:42:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:06.362 20:42:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.362 20:42:01 -- common/autotest_common.sh@10 -- # set +x 00:14:06.362 ************************************ 00:14:06.362 START TEST nvme_rpc 00:14:06.362 ************************************ 00:14:06.362 20:42:01 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:06.362 * Looking for test storage... 
00:14:06.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:06.362 20:42:01 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:06.362 20:42:01 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:06.362 20:42:01 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:06.619 20:42:01 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:06.619 20:42:01 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.619 20:42:01 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.619 20:42:01 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.619 20:42:01 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.619 20:42:01 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.619 20:42:01 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.619 20:42:01 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.619 20:42:01 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.620 20:42:01 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.620 --rc genhtml_branch_coverage=1 00:14:06.620 --rc genhtml_function_coverage=1 00:14:06.620 --rc genhtml_legend=1 00:14:06.620 --rc geninfo_all_blocks=1 00:14:06.620 --rc geninfo_unexecuted_blocks=1 00:14:06.620 00:14:06.620 ' 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.620 --rc genhtml_branch_coverage=1 00:14:06.620 --rc genhtml_function_coverage=1 00:14:06.620 --rc genhtml_legend=1 00:14:06.620 --rc geninfo_all_blocks=1 00:14:06.620 --rc geninfo_unexecuted_blocks=1 00:14:06.620 00:14:06.620 ' 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:06.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.620 --rc genhtml_branch_coverage=1 00:14:06.620 --rc genhtml_function_coverage=1 00:14:06.620 --rc genhtml_legend=1 00:14:06.620 --rc geninfo_all_blocks=1 00:14:06.620 --rc geninfo_unexecuted_blocks=1 00:14:06.620 00:14:06.620 ' 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.620 --rc genhtml_branch_coverage=1 00:14:06.620 --rc genhtml_function_coverage=1 00:14:06.620 --rc genhtml_legend=1 00:14:06.620 --rc geninfo_all_blocks=1 00:14:06.620 --rc geninfo_unexecuted_blocks=1 00:14:06.620 00:14:06.620 ' 00:14:06.620 20:42:01 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.620 20:42:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:14:06.620 20:42:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:14:06.620 20:42:01 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67783 00:14:06.620 20:42:01 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:06.620 20:42:01 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:14:06.620 20:42:01 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67783 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67783 ']' 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.620 20:42:01 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.878 [2024-11-26 20:42:01.675817] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
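While spdk_tgt comes up (its DPDK EAL parameters continue below), the RPC sequence this nvme_rpc test then drives is short enough to replay by hand. All three calls appear verbatim in the trace that follows; only the shell variable is added here for readability (rpc.py talks to /var/tmp/spdk.sock by default):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # exposes bdev Nvme0n1
  $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1             # must fail: "open file failed." (-32603)
  $rpc bdev_nvme_detach_controller Nvme0

The point of the test is the middle call: a firmware image path that does not exist has to surface as a clean JSON-RPC error rather than a crash, which is exactly the -32603 response recorded below.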
00:14:06.879 [2024-11-26 20:42:01.676230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67783 ] 00:14:07.137 [2024-11-26 20:42:01.879737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:07.137 [2024-11-26 20:42:02.057265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.137 [2024-11-26 20:42:02.057290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.548 20:42:03 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.548 20:42:03 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:08.548 20:42:03 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:14:08.548 Nvme0n1 00:14:08.548 20:42:03 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:14:08.548 20:42:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:14:09.116 request: 00:14:09.116 { 00:14:09.116 "bdev_name": "Nvme0n1", 00:14:09.116 "filename": "non_existing_file", 00:14:09.116 "method": "bdev_nvme_apply_firmware", 00:14:09.116 "req_id": 1 00:14:09.116 } 00:14:09.116 Got JSON-RPC error response 00:14:09.116 response: 00:14:09.116 { 00:14:09.116 "code": -32603, 00:14:09.116 "message": "open file failed." 00:14:09.116 } 00:14:09.116 20:42:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:14:09.116 20:42:03 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:14:09.116 20:42:03 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:09.374 20:42:04 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:09.374 20:42:04 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67783 00:14:09.374 20:42:04 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67783 ']' 00:14:09.374 20:42:04 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67783 00:14:09.374 20:42:04 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:09.374 20:42:04 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.374 20:42:04 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67783 00:14:09.374 killing process with pid 67783 00:14:09.374 20:42:04 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.374 20:42:04 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.374 20:42:04 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67783' 00:14:09.374 20:42:04 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67783 00:14:09.374 20:42:04 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67783 00:14:11.903 00:14:11.903 real 0m5.611s 00:14:11.903 user 0m10.783s 00:14:11.903 sys 0m0.841s 00:14:11.903 20:42:06 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.903 ************************************ 00:14:11.903 END TEST nvme_rpc 00:14:11.903 ************************************ 00:14:11.903 20:42:06 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.903 20:42:06 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:11.903 20:42:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:14:11.903 20:42:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.903 20:42:06 -- common/autotest_common.sh@10 -- # set +x 00:14:11.903 ************************************ 00:14:11.903 START TEST nvme_rpc_timeouts 00:14:11.903 ************************************ 00:14:11.903 20:42:06 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:12.161 * Looking for test storage... 00:14:12.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:12.161 20:42:06 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:12.161 20:42:06 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:12.161 20:42:06 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:14:12.161 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.161 20:42:07 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:14:12.161 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.161 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:12.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.161 --rc genhtml_branch_coverage=1 00:14:12.161 --rc genhtml_function_coverage=1 00:14:12.161 --rc genhtml_legend=1 00:14:12.161 --rc geninfo_all_blocks=1 00:14:12.161 --rc geninfo_unexecuted_blocks=1 00:14:12.161 00:14:12.161 ' 00:14:12.161 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:12.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.161 --rc genhtml_branch_coverage=1 00:14:12.161 --rc genhtml_function_coverage=1 00:14:12.161 --rc genhtml_legend=1 00:14:12.161 --rc geninfo_all_blocks=1 00:14:12.161 --rc geninfo_unexecuted_blocks=1 00:14:12.161 00:14:12.161 ' 00:14:12.161 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:12.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.161 --rc genhtml_branch_coverage=1 00:14:12.161 --rc genhtml_function_coverage=1 00:14:12.161 --rc genhtml_legend=1 00:14:12.161 --rc geninfo_all_blocks=1 00:14:12.161 --rc geninfo_unexecuted_blocks=1 00:14:12.161 00:14:12.161 ' 00:14:12.161 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:12.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.161 --rc genhtml_branch_coverage=1 00:14:12.161 --rc genhtml_function_coverage=1 00:14:12.161 --rc genhtml_legend=1 00:14:12.161 --rc geninfo_all_blocks=1 00:14:12.161 --rc geninfo_unexecuted_blocks=1 00:14:12.161 00:14:12.161 ' 00:14:12.161 20:42:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.161 20:42:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67870 00:14:12.161 20:42:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67870 00:14:12.161 20:42:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:12.161 20:42:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67902 00:14:12.162 20:42:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:14:12.162 20:42:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67902 00:14:12.162 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67902 ']' 00:14:12.162 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.162 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.162 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.162 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.162 20:42:07 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:12.419 [2024-11-26 20:42:07.181497] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:14:12.419 [2024-11-26 20:42:07.181640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67902 ] 00:14:12.419 [2024-11-26 20:42:07.370206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:12.677 [2024-11-26 20:42:07.543598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.677 [2024-11-26 20:42:07.543607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.696 20:42:08 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.696 20:42:08 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:14:13.696 Checking default timeout settings: 00:14:13.696 20:42:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:14:13.696 20:42:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:13.954 Making settings changes with rpc: 00:14:13.954 20:42:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:14:13.954 20:42:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:14:14.213 Check default vs. modified settings: 00:14:14.213 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:14:14.213 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67870 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67870 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:14:14.779 Setting action_on_timeout is changed as expected. 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67870 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67870 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:14:14.779 Setting timeout_us is changed as expected. 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
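The grep/awk/sed triplets traced above are the whole comparison mechanism: each setting is pulled out of the config saved before the bdev_nvme_set_options call and the config saved after it, and the test insists the two values differ. A minimal standalone sketch of that loop, reusing the exact pipeline and file names from this run (the check_setting helper name is invented here for illustration):

    settings_default=/tmp/settings_default_67870     # written by rpc.py save_config
    settings_modified=/tmp/settings_modified_67870   # written after bdev_nvme_set_options

    check_setting() {
        local name=$1 before after
        # same pipeline as the trace: grep the key out of the saved JSON,
        # take the value field, strip everything but alphanumerics
        before=$(grep "$name" "$settings_default" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$name" "$settings_modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" == "$after" ]; then
            echo "Setting $name was not changed" >&2
            return 1
        fi
        echo "Setting $name is changed as expected."
    }

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        check_setting "$setting"
    done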
00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67870 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67870 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:14:14.779 Setting timeout_admin_us is changed as expected. 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67870 /tmp/settings_modified_67870 00:14:14.779 20:42:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67902 00:14:14.779 20:42:09 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67902 ']' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67902 00:14:14.779 20:42:09 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:14:14.779 20:42:09 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67902 00:14:14.779 20:42:09 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:14.779 20:42:09 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:14.779 killing process with pid 67902 00:14:14.779 20:42:09 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67902' 00:14:14.779 20:42:09 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67902 00:14:14.779 20:42:09 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67902 00:14:17.306 RPC TIMEOUT SETTING TEST PASSED. 00:14:17.306 20:42:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
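One detail of the teardown just traced is worth calling out: the trap installed at nvme_rpc_timeouts.sh@26 guarantees cleanup on any failure, and it is disarmed (trap -) only after all three settings have been verified, at which point the script cleans up deliberately. The shape of that pattern, with the values from this run:

    # armed before spdk_tgt is started (nvme_rpc_timeouts.sh@26)
    trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT

    # ... save default config, change timeouts over RPC, save again, compare ...

    trap - SIGINT SIGTERM EXIT                                  # success: disarm
    rm -f /tmp/settings_default_67870 /tmp/settings_modified_67870
    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"                # killprocess: reap the child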
00:14:17.306 00:14:17.306 real 0m5.328s 00:14:17.306 user 0m10.223s 00:14:17.306 sys 0m0.772s 00:14:17.306 20:42:12 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.306 ************************************ 00:14:17.306 END TEST nvme_rpc_timeouts 00:14:17.306 20:42:12 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:17.306 ************************************ 00:14:17.306 20:42:12 -- spdk/autotest.sh@239 -- # uname -s 00:14:17.306 20:42:12 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:14:17.306 20:42:12 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:17.306 20:42:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:17.306 20:42:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.306 20:42:12 -- common/autotest_common.sh@10 -- # set +x 00:14:17.306 ************************************ 00:14:17.306 START TEST sw_hotplug 00:14:17.306 ************************************ 00:14:17.306 20:42:12 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:17.564 * Looking for test storage... 00:14:17.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:17.564 20:42:12 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:17.564 20:42:12 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:17.564 20:42:12 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:14:17.564 20:42:12 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.564 20:42:12 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:14:17.564 20:42:12 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.564 20:42:12 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:17.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.564 --rc genhtml_branch_coverage=1 00:14:17.564 --rc genhtml_function_coverage=1 00:14:17.564 --rc genhtml_legend=1 00:14:17.564 --rc geninfo_all_blocks=1 00:14:17.564 --rc geninfo_unexecuted_blocks=1 00:14:17.564 00:14:17.564 ' 00:14:17.564 20:42:12 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:17.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.564 --rc genhtml_branch_coverage=1 00:14:17.564 --rc genhtml_function_coverage=1 00:14:17.564 --rc genhtml_legend=1 00:14:17.564 --rc geninfo_all_blocks=1 00:14:17.564 --rc geninfo_unexecuted_blocks=1 00:14:17.564 00:14:17.564 ' 00:14:17.564 20:42:12 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:17.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.564 --rc genhtml_branch_coverage=1 00:14:17.564 --rc genhtml_function_coverage=1 00:14:17.564 --rc genhtml_legend=1 00:14:17.564 --rc geninfo_all_blocks=1 00:14:17.564 --rc geninfo_unexecuted_blocks=1 00:14:17.564 00:14:17.564 ' 00:14:17.564 20:42:12 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:17.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.564 --rc genhtml_branch_coverage=1 00:14:17.564 --rc genhtml_function_coverage=1 00:14:17.564 --rc genhtml_legend=1 00:14:17.564 --rc geninfo_all_blocks=1 00:14:17.564 --rc geninfo_unexecuted_blocks=1 00:14:17.564 00:14:17.564 ' 00:14:17.564 20:42:12 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:18.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:18.129 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:18.129 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:18.129 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:18.129 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:18.387 20:42:13 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:14:18.387 20:42:13 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:14:18.387 20:42:13 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
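The `lt 1.15 2` probe traced twice now (once per test binary) is how scripts/common.sh decides whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and compared field by field. A condensed, runnable sketch of that logic (the full cmp_versions in the trace also validates each field with the decimal helper and supports the other comparison operators):

    lt() {    # usage: lt A B  ->  succeeds iff version A < version B
        local -a ver1 ver2
        local v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A is newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A is older
        done
        return 1    # equal versions are not less-than
    }

    if lt 1.15 2; then
        # matches the trace: an lcov older than 2.x gets the extra flags
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi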
00:14:18.387 20:42:13 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@233 -- # local class 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:18.387 20:42:13 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:14:18.387 20:42:13 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:18.387 20:42:13 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:14:18.387 20:42:13 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:14:18.387 20:42:13 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:18.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:18.902 Waiting for block devices as requested 00:14:18.902 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:19.166 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:19.166 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:19.475 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:24.844 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:24.844 20:42:19 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:14:24.844 20:42:19 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:24.844 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:14:25.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:25.103 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:14:25.361 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:14:25.619 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:25.619 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:25.619 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:14:25.619 20:42:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68782 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:14:25.878 20:42:20 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:25.878 20:42:20 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:25.878 20:42:20 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:25.878 20:42:20 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:25.878 20:42:20 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:25.878 20:42:20 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:26.136 Initializing NVMe Controllers 00:14:26.136 Attaching to 0000:00:10.0 00:14:26.136 Attaching to 0000:00:11.0 00:14:26.136 Attached to 0000:00:10.0 00:14:26.136 Attached to 0000:00:11.0 00:14:26.136 Initialization complete. Starting I/O... 
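Before any of this, the nvme_in_userspace walk traced at scripts/common.sh@233-@329 is what populated the device list: enumerate every PCI function whose class/subclass/prog-if is 01/08/02 (an NVM Express controller), then keep the ones that are not denied and not claimed elsewhere. The discovery core is a single pipeline, reproduced from the trace:

    # class code "0108" + prog-if 02 == NVMe controller
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # on this VM: 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0
    # (one BDF per line); sw_hotplug then trims to the first nvme_count=2:
    #   nvmes=("${nvmes[@]::nvme_count}")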
00:14:26.136 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:26.136 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:14:26.136 00:14:27.069 QEMU NVMe Ctrl (12340 ): 1104 I/Os completed (+1104) 00:14:27.069 QEMU NVMe Ctrl (12341 ): 1104 I/Os completed (+1104) 00:14:27.069 00:14:28.004 QEMU NVMe Ctrl (12340 ): 2584 I/Os completed (+1480) 00:14:28.004 QEMU NVMe Ctrl (12341 ): 2584 I/Os completed (+1480) 00:14:28.004 00:14:29.379 QEMU NVMe Ctrl (12340 ): 4456 I/Os completed (+1872) 00:14:29.379 QEMU NVMe Ctrl (12341 ): 4457 I/Os completed (+1873) 00:14:29.379 00:14:30.314 QEMU NVMe Ctrl (12340 ): 5930 I/Os completed (+1474) 00:14:30.314 QEMU NVMe Ctrl (12341 ): 5965 I/Os completed (+1508) 00:14:30.314 00:14:31.288 QEMU NVMe Ctrl (12340 ): 7794 I/Os completed (+1864) 00:14:31.288 QEMU NVMe Ctrl (12341 ): 7830 I/Os completed (+1865) 00:14:31.288 00:14:31.854 20:42:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:31.854 20:42:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:31.854 20:42:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:31.854 [2024-11-26 20:42:26.749524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:31.854 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:31.854 [2024-11-26 20:42:26.751371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.751435] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.751460] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.751484] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:31.854 [2024-11-26 20:42:26.754815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.754871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.754891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.754912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 20:42:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:31.854 20:42:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:31.854 [2024-11-26 20:42:26.792158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:31.854 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:31.854 [2024-11-26 20:42:26.794015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.794067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.794096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.794119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:31.854 [2024-11-26 20:42:26.797216] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.797262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.797285] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 [2024-11-26 20:42:26.797304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.854 20:42:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:31.854 20:42:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:31.854 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:31.854 EAL: Scan for (pci) bus failed. 00:14:32.111 20:42:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:32.111 20:42:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:32.111 20:42:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:32.111 00:14:32.111 20:42:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:32.111 20:42:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:32.111 20:42:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:32.111 20:42:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:32.111 20:42:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:32.111 Attaching to 0000:00:10.0 00:14:32.111 Attached to 0000:00:10.0 00:14:32.369 20:42:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:32.369 20:42:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:32.369 20:42:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:32.369 Attaching to 0000:00:11.0 00:14:32.369 Attached to 0000:00:11.0 00:14:33.305 QEMU NVMe Ctrl (12340 ): 1671 I/Os completed (+1671) 00:14:33.305 QEMU NVMe Ctrl (12341 ): 1493 I/Os completed (+1493) 00:14:33.305 00:14:34.239 QEMU NVMe Ctrl (12340 ): 3683 I/Os completed (+2012) 00:14:34.239 QEMU NVMe Ctrl (12341 ): 3520 I/Os completed (+2027) 00:14:34.239 00:14:35.174 QEMU NVMe Ctrl (12340 ): 5591 I/Os completed (+1908) 00:14:35.174 QEMU NVMe Ctrl (12341 ): 5437 I/Os completed (+1917) 00:14:35.174 00:14:36.110 QEMU NVMe Ctrl (12340 ): 7435 I/Os completed (+1844) 00:14:36.110 QEMU NVMe Ctrl (12341 ): 7281 I/Os completed (+1844) 00:14:36.110 00:14:37.045 QEMU NVMe Ctrl (12340 ): 9127 I/Os completed (+1692) 00:14:37.045 QEMU NVMe Ctrl (12341 ): 8997 I/Os completed (+1716) 00:14:37.045 00:14:38.426 QEMU NVMe Ctrl (12340 ): 10891 I/Os completed (+1764) 00:14:38.426 QEMU NVMe Ctrl (12341 ): 10852 I/Os completed (+1855) 00:14:38.426 00:14:38.993 QEMU NVMe Ctrl (12340 ): 12711 I/Os completed (+1820) 00:14:38.993 QEMU NVMe Ctrl (12341 ): 12672 I/Os completed (+1820) 
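That was one full hotplug event in userspace mode: the bare `echo 1` at sw_hotplug.sh@40 is the surprise-removal write for each device, the hotplug app logs nvme_ctrlr_fail and aborts the outstanding I/O, and the @56-@62 echoes rescan the bus and steer the devices back to uio_pci_generic. xtrace hides the redirect targets, so the sysfs paths below are the standard kernel interfaces this sequence corresponds to, not literal quotes from the script:

    bdf=0000:00:10.0                                  # likewise for 0000:00:11.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"       # surprise hot-remove (sh@40)
    echo 1 > /sys/bus/pci/rescan                      # rediscover the slot (sh@56)
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # sh@59
    echo "$bdf" > /sys/bus/pci/drivers_probe          # rebind to the chosen driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # clear override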
00:14:38.993 00:14:40.369 QEMU NVMe Ctrl (12340 ): 14735 I/Os completed (+2024) 00:14:40.369 QEMU NVMe Ctrl (12341 ): 14696 I/Os completed (+2024) 00:14:40.369 00:14:41.305 QEMU NVMe Ctrl (12340 ): 16743 I/Os completed (+2008) 00:14:41.305 QEMU NVMe Ctrl (12341 ): 16704 I/Os completed (+2008) 00:14:41.305 00:14:42.239 QEMU NVMe Ctrl (12340 ): 18767 I/Os completed (+2024) 00:14:42.239 QEMU NVMe Ctrl (12341 ): 18730 I/Os completed (+2026) 00:14:42.239 00:14:43.175 QEMU NVMe Ctrl (12340 ): 20775 I/Os completed (+2008) 00:14:43.175 QEMU NVMe Ctrl (12341 ): 20746 I/Os completed (+2016) 00:14:43.175 00:14:44.111 QEMU NVMe Ctrl (12340 ): 22733 I/Os completed (+1958) 00:14:44.111 QEMU NVMe Ctrl (12341 ): 22702 I/Os completed (+1956) 00:14:44.111 00:14:44.369 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:44.369 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:44.369 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:44.369 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:44.369 [2024-11-26 20:42:39.180404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:44.369 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:44.369 [2024-11-26 20:42:39.184981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.185277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.185521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.185748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:44.369 [2024-11-26 20:42:39.192119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.192373] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.192581] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.192819] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:44.369 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:44.369 [2024-11-26 20:42:39.237476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:44.369 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:44.369 [2024-11-26 20:42:39.241415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.241515] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.241576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.241645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:44.369 [2024-11-26 20:42:39.247184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.247268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.247316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 [2024-11-26 20:42:39.247361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.369 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:44.369 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:44.369 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:44.369 EAL: Scan for (pci) bus failed. 00:14:44.627 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:44.627 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:44.627 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:44.627 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:44.627 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:44.627 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:44.627 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:44.627 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:44.627 Attaching to 0000:00:10.0 00:14:44.627 Attached to 0000:00:10.0 00:14:44.627 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:44.885 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:44.885 20:42:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:44.885 Attaching to 0000:00:11.0 00:14:44.885 Attached to 0000:00:11.0 00:14:45.143 QEMU NVMe Ctrl (12340 ): 876 I/Os completed (+876) 00:14:45.143 QEMU NVMe Ctrl (12341 ): 660 I/Os completed (+660) 00:14:45.143 00:14:46.079 QEMU NVMe Ctrl (12340 ): 2572 I/Os completed (+1696) 00:14:46.079 QEMU NVMe Ctrl (12341 ): 2366 I/Os completed (+1706) 00:14:46.079 00:14:47.013 QEMU NVMe Ctrl (12340 ): 4039 I/Os completed (+1467) 00:14:47.013 QEMU NVMe Ctrl (12341 ): 3891 I/Os completed (+1525) 00:14:47.013 00:14:48.385 QEMU NVMe Ctrl (12340 ): 5816 I/Os completed (+1777) 00:14:48.385 QEMU NVMe Ctrl (12341 ): 5712 I/Os completed (+1821) 00:14:48.385 00:14:49.339 QEMU NVMe Ctrl (12340 ): 7796 I/Os completed (+1980) 00:14:49.339 QEMU NVMe Ctrl (12341 ): 7697 I/Os completed (+1985) 00:14:49.339 00:14:50.273 QEMU NVMe Ctrl (12340 ): 9760 I/Os completed (+1964) 00:14:50.273 QEMU NVMe Ctrl (12341 ): 9665 I/Os completed (+1968) 00:14:50.273 00:14:51.209 QEMU NVMe Ctrl (12340 ): 11480 I/Os completed (+1720) 00:14:51.209 QEMU NVMe Ctrl (12341 ): 11392 I/Os completed (+1727) 00:14:51.209 00:14:52.144 
QEMU NVMe Ctrl (12340 ): 13197 I/Os completed (+1717) 00:14:52.144 QEMU NVMe Ctrl (12341 ): 13175 I/Os completed (+1783) 00:14:52.144 00:14:53.077 QEMU NVMe Ctrl (12340 ): 14773 I/Os completed (+1576) 00:14:53.077 QEMU NVMe Ctrl (12341 ): 14758 I/Os completed (+1583) 00:14:53.077 00:14:54.090 QEMU NVMe Ctrl (12340 ): 16693 I/Os completed (+1920) 00:14:54.090 QEMU NVMe Ctrl (12341 ): 16678 I/Os completed (+1920) 00:14:54.090 00:14:55.025 QEMU NVMe Ctrl (12340 ): 18581 I/Os completed (+1888) 00:14:55.025 QEMU NVMe Ctrl (12341 ): 18568 I/Os completed (+1890) 00:14:55.025 00:14:56.400 QEMU NVMe Ctrl (12340 ): 20304 I/Os completed (+1723) 00:14:56.400 QEMU NVMe Ctrl (12341 ): 20283 I/Os completed (+1715) 00:14:56.400 00:14:56.659 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:56.659 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:56.659 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:56.659 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:56.659 [2024-11-26 20:42:51.648460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:56.659 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:56.659 [2024-11-26 20:42:51.651528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.651837] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.651885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.651922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:56.917 [2024-11-26 20:42:51.656306] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.656377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.656412] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.656445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:56.917 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:56.917 [2024-11-26 20:42:51.684141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:56.917 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:56.917 [2024-11-26 20:42:51.687314] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.687390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.687431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.687466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:56.917 [2024-11-26 20:42:51.691770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.691994] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.692049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 [2024-11-26 20:42:51.692082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.917 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:56.917 EAL: Scan for (pci) bus failed. 00:14:56.917 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:56.917 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:56.917 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:56.917 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:56.918 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:57.175 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:57.175 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:57.175 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:57.176 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:57.176 20:42:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:57.176 Attaching to 0000:00:10.0 00:14:57.176 Attached to 0000:00:10.0 00:14:57.176 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:57.176 00:14:57.176 20:42:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:57.176 20:42:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:57.176 20:42:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:57.176 Attaching to 0000:00:11.0 00:14:57.176 Attached to 0000:00:11.0 00:14:57.176 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:57.176 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:57.176 [2024-11-26 20:42:52.095825] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:15:09.383 20:43:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:09.383 20:43:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:09.383 20:43:04 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.34 00:15:09.383 20:43:04 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.34 00:15:09.383 20:43:04 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:09.383 20:43:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.34 00:15:09.383 20:43:04 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.34 2 00:15:09.383 remove_attach_helper took 43.34s to 
complete (handling 2 nvme drive(s)) 20:43:04 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:15:15.949 20:43:10 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68782 00:15:15.949 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68782) - No such process 00:15:15.949 20:43:10 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68782 00:15:15.949 20:43:10 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:15:15.949 20:43:10 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:15:15.949 20:43:10 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:15:15.949 20:43:10 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69326 00:15:15.949 20:43:10 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.949 20:43:10 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:15:15.949 20:43:10 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69326 00:15:15.949 20:43:10 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69326 ']' 00:15:15.949 20:43:10 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.949 20:43:10 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.949 20:43:10 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.949 20:43:10 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.949 20:43:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:15.949 [2024-11-26 20:43:10.243574] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
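Note how the runner decided the hotplug app was done: `kill -0 68782` sends no signal at all, it only asks whether the pid still exists, and the expected answer here is the "No such process" error, after which `wait 68782` reaps the child and surfaces its exit status. As a standalone idiom:

    if ! kill -0 "$hotplug_pid" 2>/dev/null; then
        wait "$hotplug_pid"    # child already exited; collect its status
    fi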
00:15:15.950 [2024-11-26 20:43:10.244016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69326 ] 00:15:15.950 [2024-11-26 20:43:10.442929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.950 [2024-11-26 20:43:10.614403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.884 20:43:11 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.884 20:43:11 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:15:16.884 20:43:11 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:16.884 20:43:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.884 20:43:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.885 20:43:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.885 20:43:11 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:15:16.885 20:43:11 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:16.885 20:43:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:16.885 20:43:11 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:16.885 20:43:11 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:16.885 20:43:11 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:16.885 20:43:11 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:16.885 20:43:11 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:16.885 20:43:11 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:16.885 20:43:11 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:16.885 20:43:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:16.885 20:43:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:16.885 20:43:11 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:23.442 20:43:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.442 20:43:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.442 [2024-11-26 20:43:17.650496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
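This second phase (use_bdev=true) checks removal from the target's point of view rather than the PCI bus: the bdev_bdfs helper traced above asks spdk_tgt for its bdevs over RPC and reduces the answer to a unique set of NVMe PCI addresses. As a self-contained function (rpc_cmd in the trace resolves to scripts/rpc.py talking to /var/tmp/spdk.sock):

    bdev_bdfs() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }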
00:15:23.442 [2024-11-26 20:43:17.653480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.442 [2024-11-26 20:43:17.653526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.442 [2024-11-26 20:43:17.653550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.442 [2024-11-26 20:43:17.653580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.442 [2024-11-26 20:43:17.653594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.442 [2024-11-26 20:43:17.653627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.442 [2024-11-26 20:43:17.653644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.442 [2024-11-26 20:43:17.653660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.442 [2024-11-26 20:43:17.653674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.442 [2024-11-26 20:43:17.653697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.442 [2024-11-26 20:43:17.653710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.442 [2024-11-26 20:43:17.653726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.442 20:43:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:23.442 20:43:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:23.442 [2024-11-26 20:43:18.050506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
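The `(( 2 > 0 ))` / `sleep 0.5` alternation that follows the removal writes is the wait loop: keep polling bdev_bdfs until none of the removed addresses is reported any more. Reconstructed from the @50/@51 trace lines, using the helper sketched above:

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done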
00:15:23.442 [2024-11-26 20:43:18.053231] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.442 [2024-11-26 20:43:18.053277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.442 [2024-11-26 20:43:18.053298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.442 [2024-11-26 20:43:18.053323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.442 [2024-11-26 20:43:18.053338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.442 [2024-11-26 20:43:18.053350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.442 [2024-11-26 20:43:18.053366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.442 [2024-11-26 20:43:18.053378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.442 [2024-11-26 20:43:18.053392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.442 [2024-11-26 20:43:18.053405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.442 [2024-11-26 20:43:18.053419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.442 [2024-11-26 20:43:18.053431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:23.442 20:43:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.442 20:43:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:23.442 20:43:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:23.442 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:23.700 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:23.700 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:23.700 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:23.700 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:23.700 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:23.700 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:23.700 20:43:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:35.930 20:43:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.930 20:43:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:35.930 20:43:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:35.930 20:43:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.930 20:43:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:35.930 [2024-11-26 20:43:30.750810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
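The @71 pattern match a few entries up is the per-event success criterion: once the rescan and rebind complete, the sorted address list reported by the target must equal the original two-device set exactly. The variable names are not visible in the trace, but the check has this shape:

    bdfs=($(bdev_bdfs))
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]] || exit 1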
00:15:35.930 [2024-11-26 20:43:30.753908] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.930 [2024-11-26 20:43:30.754072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.930 [2024-11-26 20:43:30.754216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.930 [2024-11-26 20:43:30.754365] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.930 [2024-11-26 20:43:30.754482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.930 [2024-11-26 20:43:30.754648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.930 [2024-11-26 20:43:30.754785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.930 [2024-11-26 20:43:30.754837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.930 [2024-11-26 20:43:30.754954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.930 [2024-11-26 20:43:30.755096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.930 [2024-11-26 20:43:30.755141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.930 [2024-11-26 20:43:30.755262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.930 20:43:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:35.930 20:43:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:36.499 [2024-11-26 20:43:31.250769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:36.499 [2024-11-26 20:43:31.255409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.499 [2024-11-26 20:43:31.255570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.499 [2024-11-26 20:43:31.255741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.499 [2024-11-26 20:43:31.255863] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.499 [2024-11-26 20:43:31.255959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.499 [2024-11-26 20:43:31.256061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.499 [2024-11-26 20:43:31.256174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.499 [2024-11-26 20:43:31.256216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.499 [2024-11-26 20:43:31.256319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.499 [2024-11-26 20:43:31.256380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:36.499 [2024-11-26 20:43:31.256460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.499 [2024-11-26 20:43:31.256571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.499 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:36.499 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:36.499 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:36.499 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:36.499 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:36.499 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:36.499 20:43:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.499 20:43:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:36.499 20:43:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.499 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:36.499 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:36.499 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:36.499 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:36.499 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:36.758 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:36.758 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:36.758 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:36.758 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:36.758 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:36.758 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:36.758 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:36.758 20:43:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:49.020 20:43:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:49.020 20:43:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:49.020 20:43:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:49.020 [2024-11-26 20:43:43.751041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:49.020 [2024-11-26 20:43:43.754045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.020 [2024-11-26 20:43:43.754280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.020 [2024-11-26 20:43:43.754406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.020 [2024-11-26 20:43:43.754505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.020 [2024-11-26 20:43:43.754524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.020 [2024-11-26 20:43:43.754544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.020 [2024-11-26 20:43:43.754560] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.020 [2024-11-26 20:43:43.754575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.020 [2024-11-26 20:43:43.754588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.020 [2024-11-26 20:43:43.754605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.020 [2024-11-26 20:43:43.754631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.020 [2024-11-26 20:43:43.754658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:49.020 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:49.021 20:43:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.021 20:43:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:49.021 20:43:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.021 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:49.021 20:43:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:49.278 [2024-11-26 20:43:44.251026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:49.278 [2024-11-26 20:43:44.253585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.278 [2024-11-26 20:43:44.253673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.278 [2024-11-26 20:43:44.253698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.278 [2024-11-26 20:43:44.253722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.278 [2024-11-26 20:43:44.253739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.278 [2024-11-26 20:43:44.253751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.278 [2024-11-26 20:43:44.253771] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.278 [2024-11-26 20:43:44.253783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.278 [2024-11-26 20:43:44.253804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.278 [2024-11-26 20:43:44.253817] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:49.278 [2024-11-26 20:43:44.253834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.278 [2024-11-26 20:43:44.253846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:49.536 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:49.536 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:49.536 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:49.536 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:49.536 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:49.536 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@13 
-- # sort -u 00:15:49.536 20:43:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.536 20:43:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:49.536 20:43:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.536 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:49.536 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:49.536 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:49.536 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:49.536 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:49.794 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:49.794 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:49.794 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:49.794 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:49.794 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:49.794 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:49.794 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:49.794 20:43:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.24 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.24 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.24 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.24 2 00:16:02.132 remove_attach_helper took 45.24s to complete (handling 2 nvme drive(s)) 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.132 20:43:56 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:02.132 20:43:56 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:02.132 20:43:56 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:08.701 20:44:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:08.701 20:44:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:08.701 [2024-11-26 20:44:02.930529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
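The debug_remove_attach_helper / timing_cmd trace just above shows where the "remove_attach_helper took 45.24s" figure earlier (and the 45.81s one later) comes from: timing_cmd sets bash's TIMEFORMAT to %2R and captures what the `time` keyword reports for the full remove/attach cycle. A minimal sketch of that capture idiom; the inner redirections are an assumption, since xtrace does not show them:

    # Sketch of the TIMEFORMAT-based timing traced as timing_cmd.
    # TIMEFORMAT=%2R makes `time` print only the real elapsed seconds,
    # e.g. "45.24". The redirections below are assumed, not copied.
    timing_sketch() {
        local TIMEFORMAT=%2R elapsed
        # The command's own output is discarded; `time` writes its report
        # to the brace group's stderr, which we capture on stdout.
        elapsed=$( { time "$@" >/dev/null 2>&1; } 2>&1 )
        echo "$elapsed"
    }
    # helper_time=$(timing_sketch remove_attach_helper 3 6 true)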
00:16:08.701 [2024-11-26 20:44:02.933063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.701 [2024-11-26 20:44:02.933112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.701 [2024-11-26 20:44:02.933131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.701 [2024-11-26 20:44:02.933160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.701 [2024-11-26 20:44:02.933173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.701 [2024-11-26 20:44:02.933189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.701 [2024-11-26 20:44:02.933203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.701 [2024-11-26 20:44:02.933218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.701 [2024-11-26 20:44:02.933231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.701 [2024-11-26 20:44:02.933247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.701 [2024-11-26 20:44:02.933259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.701 [2024-11-26 20:44:02.933278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.701 20:44:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:08.701 20:44:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:08.701 20:44:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:08.701 20:44:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:08.701 20:44:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:08.701 20:44:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:08.701 20:44:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:08.701 20:44:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:08.701 20:44:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.701 20:44:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:08.701 20:44:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.701 20:44:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:08.701 20:44:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:08.701 [2024-11-26 20:44:03.630560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:08.701 [2024-11-26 20:44:03.632401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.701 [2024-11-26 20:44:03.632442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.702 [2024-11-26 20:44:03.632465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-11-26 20:44:03.632490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.702 [2024-11-26 20:44:03.632507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.702 [2024-11-26 20:44:03.632521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-11-26 20:44:03.632539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.702 [2024-11-26 20:44:03.632551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.702 [2024-11-26 20:44:03.632568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-11-26 20:44:03.632580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.702 [2024-11-26 20:44:03.632594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.702 [2024-11-26 20:44:03.632606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:09.280 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:09.280 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:09.280 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:09.280 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:09.280 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:09.280 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:09.280 20:44:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.280 20:44:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:09.280 20:44:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.280 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:09.280 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:09.280 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:09.280 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:09.280 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:09.572 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:09.572 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:09.572 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:09.572 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:09.572 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
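Every poll in the loop above goes through bdev_bdfs, whose pipeline is fully visible in the trace: query the running SPDK target for its block devices over JSON-RPC, pull out the NVMe PCI addresses, and deduplicate. As traced at sw_hotplug.sh@12-13 (jq reads from /dev/fd/63 because the script uses process substitution; a plain pipe is equivalent):

    # bdev_bdfs as traced (sw_hotplug.sh@12-13).
    # rpc_cmd forwards bdev_get_bdevs to the SPDK target's RPC socket.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }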
00:16:09.572 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:09.572 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:09.572 20:44:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:21.773 20:44:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.773 20:44:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:21.773 20:44:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:21.773 [2024-11-26 20:44:16.530836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:21.773 [2024-11-26 20:44:16.532846] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:21.773 [2024-11-26 20:44:16.532897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.773 [2024-11-26 20:44:16.532915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.773 [2024-11-26 20:44:16.532942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:21.773 [2024-11-26 20:44:16.532954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.773 [2024-11-26 20:44:16.532969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.773 [2024-11-26 20:44:16.532982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:21.773 [2024-11-26 20:44:16.532996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.773 [2024-11-26 20:44:16.533008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.773 [2024-11-26 20:44:16.533023] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:21.773 [2024-11-26 20:44:16.533035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.773 [2024-11-26 20:44:16.533049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:21.773 20:44:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.773 20:44:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:21.773 20:44:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:21.773 20:44:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:22.032 [2024-11-26 20:44:16.930846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:16:22.032 [2024-11-26 20:44:16.932647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.032 [2024-11-26 20:44:16.932689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.032 [2024-11-26 20:44:16.932709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.032 [2024-11-26 20:44:16.932733] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.032 [2024-11-26 20:44:16.932752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.032 [2024-11-26 20:44:16.932765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.032 [2024-11-26 20:44:16.932781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.032 [2024-11-26 20:44:16.932792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.032 [2024-11-26 20:44:16.932807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.032 [2024-11-26 20:44:16.932820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:22.033 [2024-11-26 20:44:16.932833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.033 [2024-11-26 20:44:16.932845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.291 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:22.291 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:22.291 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:22.291 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:22.291 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:22.291 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:16:22.291 20:44:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.291 20:44:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:22.291 20:44:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.291 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:22.291 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:22.549 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:22.549 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:22.549 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:22.549 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:22.549 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:22.549 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:22.549 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:22.549 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:22.549 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:22.549 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:22.549 20:44:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:34.750 20:44:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.750 20:44:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:34.750 20:44:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:34.750 20:44:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:34.750 20:44:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:34.750 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:34.750 [2024-11-26 20:44:29.631129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:16:34.750 [2024-11-26 20:44:29.633205] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:34.750 [2024-11-26 20:44:29.633253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.750 [2024-11-26 20:44:29.633271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.750 [2024-11-26 20:44:29.633299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:34.750 [2024-11-26 20:44:29.633312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.750 [2024-11-26 20:44:29.633328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.750 [2024-11-26 20:44:29.633345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:34.750 [2024-11-26 20:44:29.633364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.750 [2024-11-26 20:44:29.633377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.751 [2024-11-26 20:44:29.633394] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:34.751 [2024-11-26 20:44:29.633406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.751 [2024-11-26 20:44:29.633421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.751 20:44:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.751 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:34.751 20:44:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:35.317 [2024-11-26 20:44:30.031155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:35.317 [2024-11-26 20:44:30.033261] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.317 [2024-11-26 20:44:30.033305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.317 [2024-11-26 20:44:30.033326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.317 [2024-11-26 20:44:30.033350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.317 [2024-11-26 20:44:30.033369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.317 [2024-11-26 20:44:30.033383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.317 [2024-11-26 20:44:30.033402] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.317 [2024-11-26 20:44:30.033414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.317 [2024-11-26 20:44:30.033430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.317 [2024-11-26 20:44:30.033444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:35.317 [2024-11-26 20:44:30.033463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:35.317 [2024-11-26 20:44:30.033476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.317 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:35.317 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:35.317 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:35.317 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:35.317 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:35.317 20:44:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.317 20:44:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:35.317 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:35.317 20:44:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.317 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:35.317 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:35.576 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:35.576 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:35.576 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:35.576 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:35.576 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:35.576 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:35.576 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:35.576 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
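With the bdevs gone, sw_hotplug.sh@56-62 re-attaches the controllers: the traced echoes of 1, 'uio_pci_generic', each BDF, and an empty string are writes into PCI sysfs attributes, but xtrace hides the redirection targets. A hypothetical sketch of one common rebind sequence under that assumption; the exact files the script writes to are not recoverable from this log:

    # Hypothetical sysfs rebind; the echoed values match the trace,
    # the target paths are assumed (xtrace does not show redirections).
    rebind_sketch() {
        local bdf=$1 driver=uio_pci_generic
        echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf"    > "/sys/bus/pci/drivers/$driver/bind"
        echo ''        > "/sys/bus/pci/devices/$bdf/driver_override"
    }
    # rebind_sketch 0000:00:10.0; rebind_sketch 0000:00:11.0
    # then `sleep 12` (sw_hotplug.sh@66) before re-checking the bdev list.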
00:16:35.835 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:35.835 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:35.835 20:44:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:48.040 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:48.040 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:48.040 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:48.040 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:48.040 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:48.040 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.040 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:48.040 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.81 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.81 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:48.040 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.81 00:16:48.040 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.81 2 00:16:48.040 remove_attach_helper took 45.81s to complete (handling 2 nvme drive(s)) 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:48.040 20:44:42 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69326 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69326 ']' 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69326 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69326 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.040 killing process with pid 69326 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69326' 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69326 00:16:48.040 20:44:42 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69326 00:16:50.573 20:44:45 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:50.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:51.140 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:51.140 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:51.140 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.140 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.399 00:16:51.399 real 2m33.913s 00:16:51.399 user 1m51.992s 00:16:51.399 sys 0m22.596s 00:16:51.399 20:44:46 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.399 ************************************ 00:16:51.399 END TEST sw_hotplug 00:16:51.399 ************************************ 00:16:51.399 20:44:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:51.399 20:44:46 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:51.399 20:44:46 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:51.399 20:44:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:51.399 20:44:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.399 20:44:46 -- common/autotest_common.sh@10 -- # set +x 00:16:51.399 ************************************ 00:16:51.399 START TEST nvme_xnvme 00:16:51.399 ************************************ 00:16:51.399 20:44:46 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:51.399 * Looking for test storage... 00:16:51.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:51.399 20:44:46 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:51.399 20:44:46 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:51.399 20:44:46 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.659 20:44:46 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:51.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.659 --rc genhtml_branch_coverage=1 00:16:51.659 --rc genhtml_function_coverage=1 00:16:51.659 --rc genhtml_legend=1 00:16:51.659 --rc geninfo_all_blocks=1 00:16:51.659 --rc geninfo_unexecuted_blocks=1 00:16:51.659 00:16:51.659 ' 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:51.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.659 --rc genhtml_branch_coverage=1 00:16:51.659 --rc genhtml_function_coverage=1 00:16:51.659 --rc genhtml_legend=1 00:16:51.659 --rc geninfo_all_blocks=1 00:16:51.659 --rc geninfo_unexecuted_blocks=1 00:16:51.659 00:16:51.659 ' 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:51.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.659 --rc genhtml_branch_coverage=1 00:16:51.659 --rc genhtml_function_coverage=1 00:16:51.659 --rc genhtml_legend=1 00:16:51.659 --rc geninfo_all_blocks=1 00:16:51.659 --rc geninfo_unexecuted_blocks=1 00:16:51.659 00:16:51.659 ' 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:51.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.659 --rc genhtml_branch_coverage=1 00:16:51.659 --rc genhtml_function_coverage=1 00:16:51.659 --rc genhtml_legend=1 00:16:51.659 --rc geninfo_all_blocks=1 00:16:51.659 --rc geninfo_unexecuted_blocks=1 00:16:51.659 00:16:51.659 ' 00:16:51.659 20:44:46 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:16:51.659 20:44:46 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:51.659 20:44:46 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:51.659 20:44:46 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:51.659 20:44:46 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
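Interleaved with the build-config dump here, the harness verified the installed lcov version just above with cmp_versions (`lt 1.15 2` in the trace): both version strings are split into fields and compared numerically, with missing fields treated as zero. A condensed sketch of that comparison (the real helper also splits on '-' and ':', per the `IFS=.-:` in the trace):

    # Condensed form of the cmp_versions logic traced from scripts/common.sh.
    # Returns 0 (true) when $1 < $2, so `version_lt 1.15 2` succeeds.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }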
00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:51.660 20:44:46 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:51.660 20:44:46 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:51.660 20:44:46 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:51.660 20:44:46 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:51.660 #define SPDK_CONFIG_H 00:16:51.660 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:51.660 #define SPDK_CONFIG_APPS 1 00:16:51.660 #define SPDK_CONFIG_ARCH native 00:16:51.660 #define SPDK_CONFIG_ASAN 1 00:16:51.660 #undef SPDK_CONFIG_AVAHI 00:16:51.660 #undef SPDK_CONFIG_CET 00:16:51.660 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:51.660 #define SPDK_CONFIG_COVERAGE 1 00:16:51.660 #define SPDK_CONFIG_CROSS_PREFIX 00:16:51.660 #undef SPDK_CONFIG_CRYPTO 00:16:51.660 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:51.660 #undef SPDK_CONFIG_CUSTOMOCF 00:16:51.660 #undef SPDK_CONFIG_DAOS 00:16:51.660 #define SPDK_CONFIG_DAOS_DIR 00:16:51.660 #define SPDK_CONFIG_DEBUG 1 00:16:51.660 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:51.660 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:51.660 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:51.660 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:51.660 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:51.660 #undef SPDK_CONFIG_DPDK_UADK 00:16:51.660 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:51.660 #define SPDK_CONFIG_EXAMPLES 1 00:16:51.660 #undef SPDK_CONFIG_FC 00:16:51.660 #define SPDK_CONFIG_FC_PATH 00:16:51.660 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:51.660 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:51.660 #define SPDK_CONFIG_FSDEV 1 00:16:51.660 #undef SPDK_CONFIG_FUSE 00:16:51.660 #undef SPDK_CONFIG_FUZZER 00:16:51.660 #define SPDK_CONFIG_FUZZER_LIB 00:16:51.660 #undef SPDK_CONFIG_GOLANG 00:16:51.660 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:51.660 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:51.660 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:51.660 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:51.660 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:51.660 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:51.660 #undef SPDK_CONFIG_HAVE_LZ4 00:16:51.660 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:51.660 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:51.660 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:51.660 #define SPDK_CONFIG_IDXD 1 00:16:51.660 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:51.660 #undef SPDK_CONFIG_IPSEC_MB 00:16:51.660 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:51.660 #define SPDK_CONFIG_ISAL 1 00:16:51.660 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:51.660 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:51.660 #define SPDK_CONFIG_LIBDIR 00:16:51.660 #undef SPDK_CONFIG_LTO 00:16:51.660 #define SPDK_CONFIG_MAX_LCORES 128 00:16:51.660 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:51.660 #define SPDK_CONFIG_NVME_CUSE 1 00:16:51.660 #undef SPDK_CONFIG_OCF 00:16:51.660 #define SPDK_CONFIG_OCF_PATH 00:16:51.660 #define SPDK_CONFIG_OPENSSL_PATH 00:16:51.660 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:51.660 #define SPDK_CONFIG_PGO_DIR 00:16:51.660 #undef SPDK_CONFIG_PGO_USE 00:16:51.660 #define SPDK_CONFIG_PREFIX /usr/local 00:16:51.660 #undef SPDK_CONFIG_RAID5F 00:16:51.660 #undef SPDK_CONFIG_RBD 00:16:51.660 #define SPDK_CONFIG_RDMA 1 00:16:51.660 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:51.660 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:51.660 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:51.660 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:51.660 #define SPDK_CONFIG_SHARED 1 00:16:51.660 #undef SPDK_CONFIG_SMA 00:16:51.660 #define SPDK_CONFIG_TESTS 1 00:16:51.660 #undef SPDK_CONFIG_TSAN 00:16:51.661 #define SPDK_CONFIG_UBLK 1 00:16:51.661 #define SPDK_CONFIG_UBSAN 1 00:16:51.661 #undef SPDK_CONFIG_UNIT_TESTS 00:16:51.661 #undef SPDK_CONFIG_URING 00:16:51.661 #define SPDK_CONFIG_URING_PATH 00:16:51.661 #undef SPDK_CONFIG_URING_ZNS 00:16:51.661 #undef SPDK_CONFIG_USDT 00:16:51.661 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:51.661 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:51.661 #undef SPDK_CONFIG_VFIO_USER 00:16:51.661 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:51.661 #define SPDK_CONFIG_VHOST 1 00:16:51.661 #define SPDK_CONFIG_VIRTIO 1 00:16:51.661 #undef SPDK_CONFIG_VTUNE 00:16:51.661 #define SPDK_CONFIG_VTUNE_DIR 00:16:51.661 #define SPDK_CONFIG_WERROR 1 00:16:51.661 #define SPDK_CONFIG_WPDK_DIR 00:16:51.661 #define SPDK_CONFIG_XNVME 1 00:16:51.661 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:51.661 20:44:46 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:51.661 20:44:46 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:51.661 20:44:46 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.661 20:44:46 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.661 20:44:46 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.661 20:44:46 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.661 20:44:46 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.661 20:44:46 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.661 20:44:46 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.661 20:44:46 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:51.661 20:44:46 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@68 -- # uname -s 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:51.661 
20:44:46 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:51.661 20:44:46 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:16:51.661 20:44:46 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:51.662 20:44:46 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:51.662 20:44:46 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
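Before any instrumented binary runs, the trace above assembles a LeakSanitizer suppression list. A condensed sketch of what those steps amount to, using the exact paths from the log (a sketch only: the helper in autotest_common.sh also seeds the file via cat before the echo):

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    # known benign leak in the fuse3 userspace library
    echo 'leak:libfuse3.so' >> "$asan_suppression_file"
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"

Every ASAN/LSAN-instrumented process started later in the run inherits LSAN_OPTIONS and ignores leaks whose stacks match libfuse3.so.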
00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70678 ]] 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70678 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:51.662 20:44:46 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.xwwJkm 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.xwwJkm/tests/xnvme /tmp/spdk.xwwJkm 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:51.663 20:44:46 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13973295104 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5594558464 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13973295104 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5594558464 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:51.663 20:44:46 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95541874688 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4160905216 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:51.663 * Looking for test storage... 
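The storage scan above is a plain df walk: each output row is read into per-mount associative arrays. A minimal sketch of the loop being traced, with the field names exactly as they appear in the xtrace (the surrounding set_test_storage helper also maintains a fallback directory):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source    # e.g. /dev/vda5
        fss["$mount"]=$fs           # e.g. btrfs
        sizes["$mount"]=$size
        uses["$mount"]=$use
        avails["$mount"]=$avail     # free space used for the size check
    done < <(df -T | grep -v Filesystem)

The search that follows accepts the first candidate directory whose mount point reports at least requested_size (2214592512 bytes here) available, which is what the "Found test storage" line below confirms.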
00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13973295104 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:51.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:51.663 20:44:46 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:51.921 20:44:46 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.921 20:44:46 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:51.921 20:44:46 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.921 20:44:46 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:51.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.921 --rc genhtml_branch_coverage=1 00:16:51.921 --rc genhtml_function_coverage=1 00:16:51.921 --rc genhtml_legend=1 00:16:51.921 --rc geninfo_all_blocks=1 00:16:51.921 --rc geninfo_unexecuted_blocks=1 00:16:51.921 00:16:51.921 ' 00:16:51.921 20:44:46 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:51.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.921 --rc genhtml_branch_coverage=1 00:16:51.921 --rc genhtml_function_coverage=1 00:16:51.921 --rc genhtml_legend=1 00:16:51.921 --rc geninfo_all_blocks=1 
00:16:51.921 --rc geninfo_unexecuted_blocks=1 00:16:51.921 00:16:51.921 ' 00:16:51.921 20:44:46 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:51.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.921 --rc genhtml_branch_coverage=1 00:16:51.921 --rc genhtml_function_coverage=1 00:16:51.921 --rc genhtml_legend=1 00:16:51.921 --rc geninfo_all_blocks=1 00:16:51.921 --rc geninfo_unexecuted_blocks=1 00:16:51.921 00:16:51.921 ' 00:16:51.921 20:44:46 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:51.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.921 --rc genhtml_branch_coverage=1 00:16:51.921 --rc genhtml_function_coverage=1 00:16:51.921 --rc genhtml_legend=1 00:16:51.921 --rc geninfo_all_blocks=1 00:16:51.921 --rc geninfo_unexecuted_blocks=1 00:16:51.921 00:16:51.921 ' 00:16:51.922 20:44:46 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:51.922 20:44:46 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.922 20:44:46 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.922 20:44:46 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.922 20:44:46 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.922 20:44:46 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.922 20:44:46 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.922 20:44:46 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.922 20:44:46 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:51.922 20:44:46 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.922 20:44:46 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite')
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite')
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes')
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite')
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite')
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite')
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1')
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true')
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false')
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme
00:16:51.922 20:44:46 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:16:52.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:16:52.437 Waiting for block devices as requested
00:16:52.437 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:16:52.695 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:16:52.695 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:16:52.954 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:16:58.251 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:16:58.251 20:44:52 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme
00:16:58.251 20:44:53 nvme_xnvme -- xnvme/common.sh@74 -- # nproc
00:16:58.251 20:44:53 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10
00:16:58.509 20:44:53 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme
00:16:58.509 20:44:53 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*)
00:16:58.509 20:44:53 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1
00:16:58.509 20:44:53 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:16:58.509 20:44:53 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:16:58.767 No valid GPT data, bailing
00:16:58.767 20:44:53 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:16:58.767 20:44:53 nvme_xnvme -- scripts/common.sh@394 -- # pt=
00:16:58.767 20:44:53 nvme_xnvme -- scripts/common.sh@395 -- # return 1
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/common.sh@83 -- # return 0
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:16:58.767 20:44:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:16:58.767 20:44:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:58.767 20:44:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:58.767 20:44:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:58.767 ************************************
00:16:58.767 START TEST xnvme_rpc
00:16:58.767 ************************************
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71067
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71067
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71067 ']'
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:58.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:58.767 20:44:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:58.767 [2024-11-26 20:44:53.687680] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
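While the freshly launched spdk_tgt comes up on /var/tmp/spdk.sock, it is worth noting what the xnvme_rpc test exercises. rpc_cmd is a thin wrapper around scripts/rpc.py, so the traced calls are roughly equivalent to this sequence (a sketch; the empty fourth argument seen in the trace is the unused conserve-cpu flag slot):

    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev

The jq filter is the same one the rpc_xnvme helper uses below to read back name, filename, io_mechanism and conserve_cpu and compare them against what was configured.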
00:16:58.767 [2024-11-26 20:44:53.687876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71067 ] 00:16:59.026 [2024-11-26 20:44:53.890466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.283 [2024-11-26 20:44:54.070736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.283 xnvme_bdev 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:00.283 20:44:55 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71067 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71067 ']' 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71067 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71067 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.283 killing process with pid 71067 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71067' 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71067 00:17:00.283 20:44:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71067 00:17:02.812 00:17:02.812 real 0m4.188s 00:17:02.812 user 0m4.285s 00:17:02.812 sys 0m0.593s 00:17:02.812 20:44:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.812 20:44:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.812 ************************************ 00:17:02.812 END TEST xnvme_rpc 00:17:02.812 ************************************ 00:17:02.812 20:44:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:02.812 20:44:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:02.812 20:44:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.812 20:44:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:02.812 ************************************ 00:17:02.812 START TEST xnvme_bdevperf 00:17:02.812 ************************************ 00:17:02.812 20:44:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:02.812 20:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:02.812 20:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:02.812 20:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:02.812 20:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:02.812 20:44:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf
00:17:02.812 20:44:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:02.812 20:44:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:03.071 {
00:17:03.071 "subsystems": [
00:17:03.071 {
00:17:03.071 "subsystem": "bdev",
00:17:03.071 "config": [
00:17:03.071 {
00:17:03.071 "params": {
00:17:03.071 "io_mechanism": "libaio",
00:17:03.071 "conserve_cpu": false,
00:17:03.071 "filename": "/dev/nvme0n1",
00:17:03.071 "name": "xnvme_bdev"
00:17:03.071 },
00:17:03.071 "method": "bdev_xnvme_create"
00:17:03.071 },
00:17:03.071 {
00:17:03.071 "method": "bdev_wait_for_examine"
00:17:03.071 }
00:17:03.071 ]
00:17:03.071 }
00:17:03.071 ]
00:17:03.071 }
00:17:03.071 [2024-11-26 20:44:57.897794] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:17:03.071 [2024-11-26 20:44:57.897983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71152 ]
00:17:03.329 [2024-11-26 20:44:58.099146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:03.329 [2024-11-26 20:44:58.221156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:03.897 Running I/O for 5 seconds...
00:17:05.766 37929.00 IOPS, 148.16 MiB/s
[2024-11-26T20:45:01.694Z] 35459.00 IOPS, 138.51 MiB/s
[2024-11-26T20:45:02.629Z] 35290.33 IOPS, 137.85 MiB/s
[2024-11-26T20:45:04.006Z] 34783.00 IOPS, 135.87 MiB/s
00:17:09.012 Latency(us)
00:17:09.012 [2024-11-26T20:45:04.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:09.012 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:17:09.013 xnvme_bdev : 5.00 35260.01 137.73 0.00 0.00 1811.03 180.42 5617.37
00:17:09.013 [2024-11-26T20:45:04.007Z] ===================================================================================================================
00:17:09.013 [2024-11-26T20:45:04.007Z] Total : 35260.01 137.73 0.00 0.00 1811.03 180.42 5617.37
00:17:09.949 20:45:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:09.949 20:45:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:17:09.949 20:45:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:09.949 20:45:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:09.949 20:45:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:09.949 {
00:17:09.949 "subsystems": [
00:17:09.949 {
00:17:09.949 "subsystem": "bdev",
00:17:09.949 "config": [
00:17:09.949 {
00:17:09.949 "params": {
00:17:09.949 "io_mechanism": "libaio",
00:17:09.950 "conserve_cpu": false,
00:17:09.950 "filename": "/dev/nvme0n1",
00:17:09.950 "name": "xnvme_bdev"
00:17:09.950 },
00:17:09.950 "method": "bdev_xnvme_create"
00:17:09.950 },
00:17:09.950 {
00:17:09.950 "method": "bdev_wait_for_examine"
00:17:09.950 }
00:17:09.950 ]
00:17:09.950 }
00:17:09.950 ]
00:17:09.950 }
00:17:09.950 [2024-11-26 20:45:04.906117] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
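Both bdevperf passes receive their bdev table over an anonymous descriptor (--json /dev/fd/62, produced by the gen_conf helper). A standalone sketch of the same invocation, substituting process substitution for gen_conf, with paths and parameters as logged:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(echo '{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_xnvme_create","params":{"name":"xnvme_bdev","filename":"/dev/nvme0n1","io_mechanism":"libaio","conserve_cpu":false}},{"method":"bdev_wait_for_examine"}]}]}') \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096

As used here, -q is the queue depth, -w the I/O pattern, -t the runtime in seconds, -o the I/O size in bytes, and -T names the bdev under test.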
00:17:09.950 [2024-11-26 20:45:04.906847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71234 ]
00:17:10.209 [2024-11-26 20:45:05.077583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:10.209 [2024-11-26 20:45:05.197199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:10.776 Running I/O for 5 seconds...
00:17:12.647 28946.00 IOPS, 113.07 MiB/s [2024-11-26T20:45:09.018Z] 31354.50 IOPS, 122.48 MiB/s [2024-11-26T20:45:09.613Z] 28822.67 IOPS, 112.59 MiB/s [2024-11-26T20:45:10.991Z] 22284.50 IOPS, 87.05 MiB/s [2024-11-26T20:45:10.991Z] 18686.80 IOPS, 73.00 MiB/s
00:17:15.997 Latency(us)
00:17:15.997 [2024-11-26T20:45:10.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:15.997 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:17:15.997 xnvme_bdev : 5.01 18652.45 72.86 0.00 0.00 3422.28 62.42 90377.26
00:17:15.997 [2024-11-26T20:45:10.991Z] ===================================================================================================================
00:17:15.997 [2024-11-26T20:45:10.991Z] Total : 18652.45 72.86 0.00 0.00 3422.28 62.42 90377.26
00:17:16.936
00:17:16.936 real 0m13.993s
00:17:16.936 user 0m6.868s
00:17:16.936 sys 0m4.689s
00:17:16.936 20:45:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:16.936 20:45:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:16.936 ************************************
00:17:16.936 END TEST xnvme_bdevperf
00:17:16.936 ************************************
00:17:16.936 20:45:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:17:16.936 20:45:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:16.936 20:45:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:16.936 20:45:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:16.936 ************************************
00:17:16.936 START TEST xnvme_fio_plugin
00:17:16.936 ************************************
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:16.936 20:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:16.936 {
00:17:16.936 "subsystems": [
00:17:16.936 {
00:17:16.936 "subsystem": "bdev",
00:17:16.936 "config": [
00:17:16.936 {
00:17:16.936 "params": {
00:17:16.936 "io_mechanism": "libaio",
00:17:16.936 "conserve_cpu": false,
00:17:16.936 "filename": "/dev/nvme0n1",
00:17:16.936 "name": "xnvme_bdev"
00:17:16.936 },
00:17:16.936 "method": "bdev_xnvme_create"
00:17:16.936 },
00:17:16.936 {
00:17:16.936 "method": "bdev_wait_for_examine"
00:17:16.936 }
00:17:16.936 ]
00:17:16.936 }
00:17:16.936 ]
00:17:16.936 }
00:17:17.196 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:17:17.196 fio-3.35
00:17:17.196 Starting 1 thread
00:17:23.757
00:17:23.757 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71359: Tue Nov 26 20:45:17 2024
00:17:23.757 read: IOPS=32.4k, BW=126MiB/s (133MB/s)(632MiB/5001msec)
00:17:23.757 slat (usec): min=4, max=2238, avg=26.11, stdev=32.79
00:17:23.757 clat (usec): min=9, max=10221, avg=1246.87, stdev=892.26
00:17:23.757 lat (usec): min=51, max=10228, avg=1272.98, stdev=894.31
00:17:23.757 clat percentiles (usec):
00:17:23.757 | 1.00th=[ 194], 5.00th=[ 310], 10.00th=[ 400], 20.00th=[ 545],
00:17:23.757 | 30.00th=[ 693], 40.00th=[ 848], 50.00th=[ 1029], 60.00th=[ 1237],
00:17:23.757 | 70.00th=[ 1483], 80.00th=[ 1795], 90.00th=[ 2278], 95.00th=[ 2966],
00:17:23.757 | 99.00th=[ 4490], 99.50th=[ 5014], 99.90th=[ 6718], 99.95th=[ 7373],
00:17:23.757 | 99.99th=[ 8717]
00:17:23.757 bw ( KiB/s): min=102544, max=180752, per=100.00%, avg=129444.44, stdev=27878.26, samples=9
00:17:23.757 iops : min=25636, max=45188, avg=32361.11, stdev=6969.57, samples=9
00:17:23.757 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.23%, 250=2.14%
00:17:23.757 lat (usec) : 500=14.36%, 750=17.15%, 1000=14.39%
00:17:23.757 lat (msec) : 2=36.67%, 4=13.16%, 10=1.89%, 20=0.01%
00:17:23.757 cpu : usr=26.08%, sys=51.74%, ctx=67, majf=0, minf=764
00:17:23.757 IO depths : 1=0.1%, 2=0.6%, 4=3.1%, 8=8.8%, 16=22.8%, 32=61.8%, >=64=2.9%
00:17:23.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:23.757 complete : 0=0.0%, 4=97.8%, 8=0.2%, 16=0.2%, 32=0.3%, 64=1.6%, >=64=0.0%
00:17:23.757 issued rwts: total=161811,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:23.757 latency : target=0, window=0, percentile=100.00%, depth=64
00:17:23.757
00:17:23.757 Run status group 0 (all jobs):
00:17:23.757 READ: bw=126MiB/s (133MB/s), 126MiB/s-126MiB/s (133MB/s-133MB/s), io=632MiB (663MB), run=5001-5001msec
00:17:24.698 -----------------------------------------------------
00:17:24.698 Suppressions used:
00:17:24.698 count bytes template
00:17:24.698 1 11 /usr/src/fio/parse.c
00:17:24.698 1 8 libtcmalloc_minimal.so
00:17:24.698 1 904 libcrypto.so
00:17:24.698 -----------------------------------------------------
00:17:24.698
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:24.698 20:45:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:24.698 {
00:17:24.698 "subsystems": [
00:17:24.698 {
00:17:24.698 "subsystem": "bdev",
00:17:24.698 "config": [
00:17:24.698 {
00:17:24.698 "params": {
00:17:24.698 "io_mechanism": "libaio",
00:17:24.698 "conserve_cpu": false,
00:17:24.698 "filename": "/dev/nvme0n1",
00:17:24.698 "name": "xnvme_bdev"
00:17:24.698 },
00:17:24.698 "method": "bdev_xnvme_create"
00:17:24.698 },
00:17:24.698 {
00:17:24.698 "method": "bdev_wait_for_examine"
00:17:24.698 }
00:17:24.698 ]
00:17:24.698 }
00:17:24.698 ]
00:17:24.698 }
00:17:24.956 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:17:24.956 fio-3.35
00:17:24.956 Starting 1 thread
00:17:31.519
00:17:31.519 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71457: Tue Nov 26 20:45:25 2024
00:17:31.519 write: IOPS=29.3k, BW=115MiB/s (120MB/s)(573MiB/5001msec); 0 zone resets
00:17:31.519 slat (usec): min=5, max=959, avg=30.45, stdev=27.13
00:17:31.519 clat (usec): min=55, max=5616, avg=1225.07, stdev=664.89
00:17:31.519 lat (usec): min=101, max=5663, avg=1255.52, stdev=667.59
00:17:31.519 clat percentiles (usec):
00:17:31.519 | 1.00th=[ 235], 5.00th=[ 334], 10.00th=[ 433], 20.00th=[ 611],
00:17:31.519 | 30.00th=[ 783], 40.00th=[ 947], 50.00th=[ 1123], 60.00th=[ 1336],
00:17:31.519 | 70.00th=[ 1549], 80.00th=[ 1827], 90.00th=[ 2114], 95.00th=[ 2343],
00:17:31.519 | 99.00th=[ 3032], 99.50th=[ 3621], 99.90th=[ 4359], 99.95th=[ 4621],
00:17:31.519 | 99.99th=[ 4948]
00:17:31.519 bw ( KiB/s): min=101856, max=151288, per=96.46%, avg=113142.22, stdev=15302.48, samples=9
00:17:31.519 iops : min=25464, max=37822, avg=28285.56, stdev=3825.62, samples=9
00:17:31.519 lat (usec) : 100=0.01%, 250=1.48%, 500=12.19%, 750=14.63%, 1000=14.91%
00:17:31.519 lat (msec) : 2=42.91%, 4=13.62%, 10=0.26%
00:17:31.519 cpu : usr=23.62%, sys=53.68%, ctx=52, majf=0, minf=764
00:17:31.519 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=12.2%, 16=26.0%, 32=53.6%, >=64=1.7%
00:17:31.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:31.519 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0%
00:17:31.519 issued rwts: total=0,146651,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:31.519 latency : target=0, window=0, percentile=100.00%, depth=64
00:17:31.520
00:17:31.520 Run status group 0 (all jobs):
00:17:31.520 WRITE: bw=115MiB/s (120MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s), io=573MiB (601MB), run=5001-5001msec
00:17:32.087 -----------------------------------------------------
00:17:32.087 Suppressions used:
00:17:32.087 count bytes template
00:17:32.087 1 11 /usr/src/fio/parse.c
00:17:32.087 1 8 libtcmalloc_minimal.so
00:17:32.087 1 904 libcrypto.so
00:17:32.087 -----------------------------------------------------
00:17:32.087
00:17:32.087
00:17:32.087 real 0m15.179s
00:17:32.087 user 0m6.538s
00:17:32.087 sys 0m6.046s 20:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:32.087 20:45:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:32.087 ************************************
00:17:32.087 END TEST xnvme_fio_plugin
00:17:32.087 ************************************
00:17:32.087 20:45:27 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:17:32.087 20:45:27 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:17:32.087 20:45:27 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:17:32.087 20:45:27 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:17:32.087 20:45:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:32.087 20:45:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:32.087 20:45:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:32.346 ************************************
00:17:32.346 START TEST xnvme_rpc
00:17:32.346 ************************************
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71543
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71543
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71543 ']'
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:32.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 20:45:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:32.346 20:45:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:32.346 [2024-11-26 20:45:27.229744] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:17:32.346 [2024-11-26 20:45:27.229922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71543 ]
00:17:32.605 [2024-11-26 20:45:27.420422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:32.605 [2024-11-26 20:45:27.539681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:33.540 xnvme_bdev
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:33.540 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]]
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71543
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71543 ']'
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71543
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71543
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:33.799 killing process with pid 71543 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71543'
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71543
00:17:33.799 20:45:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71543
00:17:36.332
00:17:36.332 real 0m4.109s
00:17:36.332 user 0m4.184s
00:17:36.332 sys 0m0.571s
00:17:36.332 20:45:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:36.332 20:45:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:36.332 ************************************
00:17:36.332 END TEST xnvme_rpc
00:17:36.332 ************************************
00:17:36.332 20:45:31 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:17:36.332 20:45:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:36.332 20:45:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:36.332 20:45:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:36.332 ************************************
00:17:36.332 START TEST xnvme_bdevperf
00:17:36.332 ************************************
00:17:36.332 20:45:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:17:36.332 20:45:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:17:36.332 20:45:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio
00:17:36.332 20:45:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:36.332 20:45:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:17:36.332 20:45:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:36.332 20:45:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:36.332 20:45:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:36.332 {
00:17:36.332 "subsystems": [
00:17:36.332 {
00:17:36.332 "subsystem": "bdev",
00:17:36.332 "config": [
00:17:36.332 {
00:17:36.332 "params": {
00:17:36.332 "io_mechanism": "libaio",
00:17:36.332 "conserve_cpu": true,
00:17:36.332 "filename": "/dev/nvme0n1",
00:17:36.332 "name": "xnvme_bdev"
00:17:36.332 },
00:17:36.332 "method": "bdev_xnvme_create"
00:17:36.332 },
00:17:36.332 {
00:17:36.332 "method": "bdev_wait_for_examine"
00:17:36.332 }
00:17:36.332 ]
00:17:36.332 }
00:17:36.332 ]
00:17:36.332 }
00:17:36.590 [2024-11-26 20:45:31.374841] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:17:36.590 [2024-11-26 20:45:31.375001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71630 ]
00:17:36.590 [2024-11-26 20:45:31.568392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:36.898 [2024-11-26 20:45:31.686504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:37.171 Running I/O for 5 seconds...
00:17:39.482 29302.00 IOPS, 114.46 MiB/s [2024-11-26T20:45:35.411Z] 29333.50 IOPS, 114.58 MiB/s [2024-11-26T20:45:36.347Z] 29571.33 IOPS, 115.51 MiB/s [2024-11-26T20:45:37.283Z] 29521.00 IOPS, 115.32 MiB/s
00:17:42.289 Latency(us)
00:17:42.289 [2024-11-26T20:45:37.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:42.289 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:17:42.289 xnvme_bdev : 5.00 29474.12 115.13 0.00 0.00 2166.19 261.36 5492.54
00:17:42.289 [2024-11-26T20:45:37.283Z] ===================================================================================================================
00:17:42.289 [2024-11-26T20:45:37.283Z] Total : 29474.12 115.13 0.00 0.00 2166.19 261.36 5492.54
00:17:43.663 20:45:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:43.663 20:45:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:17:43.663 20:45:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:43.663 20:45:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:43.663 20:45:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:43.663 {
00:17:43.663 "subsystems": [
00:17:43.663 {
00:17:43.663 "subsystem": "bdev",
00:17:43.663 "config": [
00:17:43.663 {
00:17:43.663 "params": {
00:17:43.663 "io_mechanism": "libaio",
00:17:43.663 "conserve_cpu": true,
00:17:43.663 "filename": "/dev/nvme0n1",
00:17:43.663 "name": "xnvme_bdev"
00:17:43.663 },
00:17:43.663 "method": "bdev_xnvme_create"
00:17:43.663 },
00:17:43.663 {
00:17:43.663 "method": "bdev_wait_for_examine"
00:17:43.663 }
00:17:43.663 ]
00:17:43.663 }
00:17:43.663 ]
00:17:43.663 }
00:17:43.663 [2024-11-26 20:45:38.501103] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:17:43.663 [2024-11-26 20:45:38.501283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71709 ]
00:17:43.922 [2024-11-26 20:45:38.686931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:43.922 [2024-11-26 20:45:38.801532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:44.212 Running I/O for 5 seconds...
00:17:46.521 31228.00 IOPS, 121.98 MiB/s [2024-11-26T20:45:42.451Z] 29876.50 IOPS, 116.71 MiB/s [2024-11-26T20:45:43.384Z] 31844.33 IOPS, 124.39 MiB/s [2024-11-26T20:45:44.320Z] 31797.25 IOPS, 124.21 MiB/s
00:17:49.326 Latency(us)
00:17:49.326 [2024-11-26T20:45:44.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:49.326 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:17:49.326 xnvme_bdev : 5.00 31065.99 121.35 0.00 0.00 2055.90 249.66 5586.16
00:17:49.326 [2024-11-26T20:45:44.320Z] ===================================================================================================================
00:17:49.326 [2024-11-26T20:45:44.320Z] Total : 31065.99 121.35 0.00 0.00 2055.90 249.66 5586.16
00:17:50.701
00:17:50.701 real 0m14.424s
00:17:50.701 user 0m5.465s
00:17:50.701 sys 0m6.320s
00:17:50.701 20:45:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:50.702 20:45:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:50.702 ************************************
00:17:50.702 END TEST xnvme_bdevperf
00:17:50.702 ************************************
00:17:50.960 20:45:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:17:50.960 20:45:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:50.960 20:45:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:50.960 20:45:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:50.960 ************************************
00:17:50.960 START TEST xnvme_fio_plugin
00:17:50.960 ************************************
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:50.960 20:45:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:50.960 {
00:17:50.960 "subsystems": [
00:17:50.960 {
00:17:50.960 "subsystem": "bdev",
00:17:50.960 "config": [
00:17:50.960 {
00:17:50.960 "params": {
00:17:50.960 "io_mechanism": "libaio",
00:17:50.960 "conserve_cpu": true,
00:17:50.960 "filename": "/dev/nvme0n1",
00:17:50.960 "name": "xnvme_bdev"
00:17:50.960 },
00:17:50.960 "method": "bdev_xnvme_create"
00:17:50.960 },
00:17:50.960 {
00:17:50.960 "method": "bdev_wait_for_examine"
00:17:50.960 }
00:17:50.960 ]
00:17:50.960 }
00:17:50.960 ]
00:17:50.960 }
00:17:51.219 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:17:51.219 fio-3.35
00:17:51.219 Starting 1 thread
00:17:57.791
00:17:57.791 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71841: Tue Nov 26 20:45:52 2024
00:17:57.791 read: IOPS=30.7k, BW=120MiB/s (126MB/s)(601MiB/5001msec)
00:17:57.791 slat (usec): min=5, max=2833, avg=29.01, stdev=29.81
00:17:57.791 clat (usec): min=67, max=5758, avg=1181.95, stdev=695.32
00:17:57.791 lat (usec): min=102, max=5936, avg=1210.96, stdev=699.46
00:17:57.791 clat percentiles (usec):
00:17:57.791 | 1.00th=[ 217], 5.00th=[ 322], 10.00th=[ 420], 20.00th=[ 586],
00:17:57.791 | 30.00th=[ 734], 40.00th=[ 881], 50.00th=[ 1037], 60.00th=[ 1221],
00:17:57.791 | 70.00th=[ 1434], 80.00th=[ 1729], 90.00th=[ 2147], 95.00th=[ 2442],
00:17:57.791 | 99.00th=[ 3425], 99.50th=[ 3884], 99.90th=[ 4555], 99.95th=[ 4752],
00:17:57.791 | 99.99th=[ 5145]
00:17:57.791 bw ( KiB/s): min=99576, max=165496, per=100.00%, avg=126046.22, stdev=18477.48, samples=9
00:17:57.791 iops : min=24894, max=41374, avg=31511.56, stdev=4619.37, samples=9
00:17:57.791 lat (usec) : 100=0.01%, 250=2.00%, 500=12.55%, 750=16.36%, 1000=16.84%
00:17:57.791 lat (msec) : 2=39.40%, 4=12.47%, 10=0.39%
00:17:57.791 cpu : usr=23.60%, sys=53.36%, ctx=71, majf=0, minf=764
00:17:57.791 IO depths : 1=0.1%, 2=1.5%, 4=4.9%, 8=11.5%, 16=25.6%, 32=54.7%, >=64=1.7%
00:17:57.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:57.791 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0%
00:17:57.791 issued rwts: total=153736,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:57.791 latency : target=0, window=0, percentile=100.00%, depth=64
00:17:57.791
00:17:57.791 Run status group 0 (all jobs):
00:17:57.791 READ: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=601MiB (630MB), run=5001-5001msec
00:17:58.726 -----------------------------------------------------
00:17:58.726 Suppressions used:
00:17:58.726 count bytes template
00:17:58.726 1 11 /usr/src/fio/parse.c
00:17:58.726 1 8 libtcmalloc_minimal.so
00:17:58.726 1 904 libcrypto.so
00:17:58.726 -----------------------------------------------------
00:17:58.726
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:58.726 20:45:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:58.726 {
00:17:58.726 "subsystems": [
00:17:58.726 {
00:17:58.726 "subsystem": "bdev",
00:17:58.726 "config": [
00:17:58.726 {
00:17:58.726 "params": {
00:17:58.726 "io_mechanism": "libaio",
00:17:58.726 "conserve_cpu": true,
00:17:58.726 "filename": "/dev/nvme0n1",
00:17:58.726 "name": "xnvme_bdev"
00:17:58.726 },
00:17:58.726 "method": "bdev_xnvme_create"
00:17:58.726 },
00:17:58.726 {
00:17:58.726 "method": "bdev_wait_for_examine"
00:17:58.726 }
00:17:58.726 ]
00:17:58.726 }
00:17:58.726 ]
00:17:58.726 }
00:17:58.984 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:17:58.984 fio-3.35
00:17:58.984 Starting 1 thread
00:18:05.598
00:18:05.598 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71938: Tue Nov 26 20:45:59 2024
00:18:05.598 write: IOPS=29.2k, BW=114MiB/s (120MB/s)(570MiB/5001msec); 0 zone resets
00:18:05.598 slat (usec): min=5, max=945, avg=30.46, stdev=29.41
00:18:05.598 clat (usec): min=75, max=5421, avg=1245.45, stdev=651.85
00:18:05.598 lat (usec): min=114, max=5496, avg=1275.92, stdev=653.44
00:18:05.598 clat percentiles (usec):
00:18:05.598 | 1.00th=[ 239], 5.00th=[ 347], 10.00th=[ 457], 20.00th=[ 652],
00:18:05.598 | 30.00th=[ 832], 40.00th=[ 996], 50.00th=[ 1156], 60.00th=[ 1336],
00:18:05.598 | 70.00th=[ 1549], 80.00th=[ 1795], 90.00th=[ 2114], 95.00th=[ 2343],
00:18:05.598 | 99.00th=[ 3064], 99.50th=[ 3621], 99.90th=[ 4490], 99.95th=[ 4686],
00:18:05.598 | 99.99th=[ 5014]
00:18:05.598 bw ( KiB/s): min=96936, max=141680, per=100.00%, avg=118242.67, stdev=14962.35, samples=9
00:18:05.598 iops : min=24234, max=35420, avg=29560.67, stdev=3740.59, samples=9
00:18:05.598 lat (usec) : 100=0.01%, 250=1.30%, 500=10.80%, 750=13.11%, 1000=14.87%
00:18:05.598 lat (msec) : 2=46.44%, 4=13.20%, 10=0.29%
00:18:05.598 cpu : usr=23.68%, sys=54.14%, ctx=117, majf=0, minf=764
00:18:05.598 IO depths : 1=0.1%, 2=1.3%, 4=4.9%, 8=11.6%, 16=25.6%, 32=54.7%, >=64=1.7%
00:18:05.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:05.598 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0%
00:18:05.598 issued rwts: total=0,145932,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:05.598 latency : target=0, window=0, percentile=100.00%, depth=64
00:18:05.598
00:18:05.598 Run status group 0 (all jobs):
00:18:05.598 WRITE: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=570MiB (598MB), run=5001-5001msec
00:18:06.531 -----------------------------------------------------
00:18:06.531 Suppressions used:
00:18:06.531 count bytes template
00:18:06.531 1 11 /usr/src/fio/parse.c
00:18:06.531 1 8 libtcmalloc_minimal.so
00:18:06.531 1 904 libcrypto.so
00:18:06.531 -----------------------------------------------------
00:18:06.531
00:18:06.531
00:18:06.531 real 0m15.473s
00:18:06.531 user 0m6.597s
00:18:06.531 sys 0m6.260s
00:18:06.531 20:46:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:06.531 ************************************
00:18:06.531 END TEST xnvme_fio_plugin
00:18:06.531 ************************************
00:18:06.531 20:46:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:18:06.531 20:46:01 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:18:06.531 20:46:01 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:18:06.531 20:46:01 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:18:06.531 20:46:01 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:18:06.531 20:46:01 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:18:06.531 20:46:01 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:18:06.531 20:46:01 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:18:06.531 20:46:01 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:18:06.531 20:46:01 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:18:06.531 20:46:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:06.531 20:46:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:06.531 20:46:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:06.346 ************************************
00:18:06.346 START TEST xnvme_rpc
00:18:06.346 ************************************
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72025
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72025
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72025 ']'
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:06.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 20:46:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:06.532 20:46:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:06.532 [2024-11-26 20:46:01.448098] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:18:06.532 [2024-11-26 20:46:01.448268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72025 ]
00:18:06.789 [2024-11-26 20:46:01.628630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:06.789 [2024-11-26 20:46:01.749029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:07.723 xnvme_bdev
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:07.723 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.981 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72025
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72025 ']'
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72025
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72025
00:18:07.982 killing process with pid 72025 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72025'
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72025
00:18:07.982 20:46:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72025
00:18:10.514 ************************************
00:18:10.514 END TEST xnvme_rpc
00:18:10.514 ************************************
00:18:10.514
00:18:10.514 real 0m4.104s
00:18:10.514 user 0m4.198s
00:18:10.514 sys 0m0.556s
00:18:10.514 20:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:10.514 20:46:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:10.514 20:46:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:18:10.514 20:46:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:10.514 20:46:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:10.514 20:46:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:10.514 ************************************
00:18:10.514 START TEST xnvme_bdevperf
00:18:10.514 ************************************
00:18:10.514 20:46:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:18:10.514 20:46:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:18:10.514 20:46:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:18:10.514 20:46:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:10.514 20:46:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:18:10.514 20:46:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:18:10.514 20:46:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:10.514 20:46:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:10.514 {
00:18:10.514 "subsystems": [
00:18:10.514 {
00:18:10.514 "subsystem": "bdev",
00:18:10.514 "config": [
00:18:10.514 {
00:18:10.514 "params": {
00:18:10.514 "io_mechanism": "io_uring",
00:18:10.514 "conserve_cpu": false,
00:18:10.514 "filename": "/dev/nvme0n1",
00:18:10.514 "name": "xnvme_bdev"
00:18:10.514 },
00:18:10.514 "method": "bdev_xnvme_create"
00:18:10.515 },
00:18:10.515 {
00:18:10.515 "method": "bdev_wait_for_examine"
00:18:10.515 }
00:18:10.515 ]
00:18:10.515 }
00:18:10.515 ]
00:18:10.515 }
00:18:10.773 [2024-11-26 20:46:05.531431] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:18:10.773 [2024-11-26 20:46:05.531750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72111 ]
00:18:10.773 [2024-11-26 20:46:05.705471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:11.031 [2024-11-26 20:46:05.827583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:11.289 Running I/O for 5 seconds...
00:18:13.217 40725.00 IOPS, 159.08 MiB/s [2024-11-26T20:46:09.609Z] 41053.00 IOPS, 160.36 MiB/s [2024-11-26T20:46:10.541Z] 43399.00 IOPS, 169.53 MiB/s [2024-11-26T20:46:11.473Z] 43903.25 IOPS, 171.50 MiB/s
00:18:16.479 Latency(us)
00:18:16.479 [2024-11-26T20:46:11.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:16.480 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:18:16.480 xnvme_bdev : 5.00 44615.19 174.28 0.00 0.00 1429.95 71.19 9799.19
00:18:16.480 [2024-11-26T20:46:11.474Z] ===================================================================================================================
00:18:16.480 [2024-11-26T20:46:11.474Z] Total : 44615.19 174.28 0.00 0.00 1429.95 71.19 9799.19
00:18:17.414 20:46:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:17.414 20:46:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:18:17.414 20:46:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:18:17.414 20:46:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:17.414 20:46:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:17.672 {
00:18:17.672 "subsystems": [
00:18:17.672 {
00:18:17.672 "subsystem": "bdev",
00:18:17.672 "config": [
00:18:17.672 {
00:18:17.672 "params": {
00:18:17.672 "io_mechanism": "io_uring",
00:18:17.672 "conserve_cpu": false,
00:18:17.672 "filename": "/dev/nvme0n1",
00:18:17.672 "name": "xnvme_bdev"
00:18:17.672 },
00:18:17.672 "method": "bdev_xnvme_create"
00:18:17.672 },
00:18:17.672 {
00:18:17.672 "method": "bdev_wait_for_examine"
00:18:17.672 }
00:18:17.672 ]
00:18:17.672 }
00:18:17.672 ]
00:18:17.672 }
00:18:17.672 [2024-11-26 20:46:12.485012] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:18:17.672 [2024-11-26 20:46:12.485199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72192 ]
00:18:17.930 [2024-11-26 20:46:12.671391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:17.930 [2024-11-26 20:46:12.792924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:18.188 Running I/O for 5 seconds...
00:18:20.497 37120.00 IOPS, 145.00 MiB/s [2024-11-26T20:46:16.424Z] 39328.00 IOPS, 153.62 MiB/s [2024-11-26T20:46:17.360Z] 39082.67 IOPS, 152.67 MiB/s [2024-11-26T20:46:18.293Z] 38832.50 IOPS, 151.69 MiB/s [2024-11-26T20:46:18.293Z] 38298.80 IOPS, 149.60 MiB/s
00:18:23.299 Latency(us)
00:18:23.299 [2024-11-26T20:46:18.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:23.299 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:18:23.299 xnvme_bdev : 5.01 38245.16 149.40 0.00 0.00 1667.20 889.42 5835.82
00:18:23.299 [2024-11-26T20:46:18.293Z] ===================================================================================================================
00:18:23.299 [2024-11-26T20:46:18.293Z] Total : 38245.16 149.40 0.00 0.00 1667.20 889.42 5835.82
00:18:24.674
00:18:24.674 real 0m14.041s
00:18:24.674 user 0m7.091s
00:18:24.674 sys 0m6.718s
00:18:24.674 20:46:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:24.674 ************************************
00:18:24.674 END TEST xnvme_bdevperf
00:18:24.674 ************************************
00:18:24.674 20:46:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:24.674 20:46:19 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:18:24.674 20:46:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:24.674 20:46:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:24.674 20:46:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:24.674 ************************************
00:18:24.674 START TEST xnvme_fio_plugin
00:18:24.674 ************************************
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:24.674 20:46:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:24.674 20:46:19
min=179712, max=227328, per=99.27%, avg=206503.11, stdev=18064.83, samples=9 00:18:31.495 iops : min=44928, max=56832, avg=51625.78, stdev=4516.21, samples=9 00:18:31.495 lat (usec) : 750=0.01%, 1000=26.96% 00:18:31.495 lat (msec) : 2=73.00%, 4=0.04% 00:18:31.495 cpu : usr=32.18%, sys=67.00%, ctx=14, majf=0, minf=762 00:18:31.495 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:31.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.495 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:31.495 issued rwts: total=260082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.495 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.495 00:18:31.495 Run status group 0 (all jobs): 00:18:31.495 READ: bw=203MiB/s (213MB/s), 203MiB/s-203MiB/s (213MB/s-213MB/s), io=1016MiB (1065MB), run=5001-5001msec 00:18:32.441 ----------------------------------------------------- 00:18:32.441 Suppressions used: 00:18:32.441 count bytes template 00:18:32.441 1 11 /usr/src/fio/parse.c 00:18:32.441 1 8 libtcmalloc_minimal.so 00:18:32.441 1 904 libcrypto.so 00:18:32.441 ----------------------------------------------------- 00:18:32.441 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:32.441 20:46:27 
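
The fio_plugin helper traced above resolves the sanitizer runtime from the plugin's ldd output and preloads it together with the SPDK ioengine before invoking fio. A condensed sketch of what the trace amounts to (a reconstruction of the helper's effect, not its verbatim source; paths as logged, and the /tmp config file stands in for the gen_conf output on /dev/fd/62):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Mirror the traced pipeline: ldd "$plugin" | grep libasan | awk '{print $3}'
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload ASan first, then the ioengine, exactly as LD_PRELOAD was set above.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
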
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:32.441 20:46:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:32.441 {
00:18:32.441 "subsystems": [
00:18:32.441 {
00:18:32.441 "subsystem": "bdev",
00:18:32.441 "config": [
00:18:32.441 {
00:18:32.441 "params": {
00:18:32.441 "io_mechanism": "io_uring",
00:18:32.441 "conserve_cpu": false,
00:18:32.441 "filename": "/dev/nvme0n1",
00:18:32.441 "name": "xnvme_bdev"
00:18:32.441 },
00:18:32.441 "method": "bdev_xnvme_create"
00:18:32.441 },
00:18:32.441 {
00:18:32.441 "method": "bdev_wait_for_examine"
00:18:32.441 }
00:18:32.441 ]
00:18:32.441 }
00:18:32.441 ]
00:18:32.441 }
00:18:32.700 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:18:32.700 fio-3.35
00:18:32.700 Starting 1 thread
00:18:39.256
00:18:39.256 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72418: Tue Nov 26 20:46:33 2024
00:18:39.256 write: IOPS=45.4k, BW=178MiB/s (186MB/s)(888MiB/5002msec); 0 zone resets
00:18:39.256 slat (usec): min=2, max=114, avg= 4.42, stdev= 1.84
00:18:39.256 clat (usec): min=781, max=3379, avg=1233.54, stdev=198.05
00:18:39.256 lat (usec): min=784, max=3391, avg=1237.95, stdev=198.88
00:18:39.257 clat percentiles (usec):
00:18:39.257 | 1.00th=[ 898], 5.00th=[ 988], 10.00th=[ 1029], 20.00th=[ 1090],
00:18:39.257 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1254],
00:18:39.257 | 70.00th=[ 1287], 80.00th=[ 1352], 90.00th=[ 1434], 95.00th=[ 1598],
00:18:39.257 | 99.00th=[ 1926], 99.50th=[ 2008], 99.90th=[ 2737], 99.95th=[ 2999],
00:18:39.257 | 99.99th=[ 3228]
00:18:39.257 bw ( KiB/s): min=169984, max=201496, per=100.00%, avg=183238.22, stdev=10877.60, samples=9
00:18:39.257 iops : min=42496, max=50376, avg=45809.56, stdev=2719.46, samples=9
00:18:39.257 lat (usec) : 1000=6.20%
00:18:39.257 lat (msec) : 2=93.24%, 4=0.56%
00:18:39.257 cpu : usr=34.31%, sys=64.75%, ctx=12, majf=0, minf=762
00:18:39.257 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:18:39.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:39.257 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:18:39.257 issued rwts: total=0,227328,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:39.257 latency : target=0, window=0, percentile=100.00%, depth=64
00:18:39.257
00:18:39.257 Run status group 0 (all jobs):
00:18:39.257 WRITE: bw=178MiB/s (186MB/s), 178MiB/s-178MiB/s (186MB/s-186MB/s), io=888MiB (931MB), run=5002-5002msec
00:18:40.190 -----------------------------------------------------
00:18:40.190 Suppressions used:
00:18:40.190 count bytes template
00:18:40.190 1 11 /usr/src/fio/parse.c
00:18:40.190 1 8 libtcmalloc_minimal.so
00:18:40.190 1 904 libcrypto.so
00:18:40.190 -----------------------------------------------------
00:18:40.190
00:18:40.190
00:18:40.190 real 0m15.309s
00:18:40.190 user 0m7.528s
00:18:40.190 sys 0m7.408s
00:18:40.190 20:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:40.190 ************************************
00:18:40.190 END TEST xnvme_fio_plugin
00:18:40.190 ************************************
00:18:40.190 20:46:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
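
With the io_uring/conserve_cpu=false round finished, the harness flips conserve_cpu and repeats the same three tests. The sweep in xnvme.sh has this shape (a sketch inferred from the @82-@88 trace lines; the array contents are assumed from the run order, false first and true second):

    # Hypothetical reconstruction of the driving loop seen in the trace.
    xnvme_conserve_cpu=(false true)
    for cc in "${xnvme_conserve_cpu[@]}"; do
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc
        conserve_cpu=$cc
        run_test xnvme_rpc xnvme_rpc
        run_test xnvme_bdevperf xnvme_bdevperf
        run_test xnvme_fio_plugin xnvme_fio_plugin
    done
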
00:18:40.190 20:46:34 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:18:40.190 20:46:34 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:18:40.190 20:46:34 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:18:40.190 20:46:34 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:18:40.190 20:46:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:40.190 20:46:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:40.190 20:46:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:40.190 ************************************
00:18:40.190 START TEST xnvme_rpc
00:18:40.190 ************************************
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72504
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72504
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72504 ']'
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:40.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:40.190 20:46:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:40.190 [2024-11-26 20:46:35.028889] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:18:40.190 [2024-11-26 20:46:35.029217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72504 ]
00:18:40.448 [2024-11-26 20:46:35.197528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:40.448 [2024-11-26 20:46:35.314774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:41.384 xnvme_bdev
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:41.384 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:18:41.643 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:41.643 20:46:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72504
00:18:41.643 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72504 ']'
00:18:41.643 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72504
00:18:41.643 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:18:41.643 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:41.643 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72504
00:18:41.644 killing process with pid 72504
00:18:41.644 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:41.644 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:41.644 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72504'
00:18:41.644 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72504
00:18:41.644 20:46:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72504
00:18:44.177
00:18:44.177 real 0m3.956s
00:18:44.177 user 0m4.051s
00:18:44.177 sys 0m0.507s
00:18:44.177 20:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:44.177 ************************************
00:18:44.177 END TEST xnvme_rpc
00:18:44.177 ************************************
00:18:44.177 20:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
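
The xnvme_rpc test that just passed drives a live spdk_tgt over /var/tmp/spdk.sock rather than a static JSON config. The same create/delete can be issued by hand with SPDK's stock RPC client; a sketch, assuming it is run from the repo root (rpc_cmd in the trace is a thin wrapper around this script):

    # "-c" is the conserve_cpu flag, matching cc["true"]=-c in the trace.
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    # ... exercise the bdev ...
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev
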
00:18:44.177 20:46:38 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:18:44.177 20:46:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:44.177 20:46:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:44.177 20:46:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:44.177 ************************************
00:18:44.177 START TEST xnvme_bdevperf
00:18:44.177 ************************************
00:18:44.177 20:46:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:18:44.177 20:46:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:18:44.177 20:46:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:18:44.177 20:46:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:44.177 20:46:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:18:44.177 20:46:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:18:44.177 20:46:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:44.177 20:46:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:44.177 {
00:18:44.177 "subsystems": [
00:18:44.177 {
00:18:44.177 "subsystem": "bdev",
00:18:44.177 "config": [
00:18:44.177 {
00:18:44.177 "params": {
00:18:44.177 "io_mechanism": "io_uring",
00:18:44.177 "conserve_cpu": true,
00:18:44.177 "filename": "/dev/nvme0n1",
00:18:44.177 "name": "xnvme_bdev"
00:18:44.177 },
00:18:44.177 "method": "bdev_xnvme_create"
00:18:44.177 },
00:18:44.177 {
00:18:44.177 "method": "bdev_wait_for_examine"
00:18:44.177 }
00:18:44.177 ]
00:18:44.177 }
00:18:44.177 ]
00:18:44.177 }
00:18:44.177 [2024-11-26 20:46:39.037068] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:18:44.177 [2024-11-26 20:46:39.037196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72588 ]
00:18:44.435 [2024-11-26 20:46:39.209790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:44.435 [2024-11-26 20:46:39.329679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:45.001 Running I/O for 5 seconds...
00:18:46.870 47804.00 IOPS, 186.73 MiB/s [2024-11-26T20:46:42.799Z] 46814.00 IOPS, 182.87 MiB/s [2024-11-26T20:46:43.741Z] 48084.00 IOPS, 187.83 MiB/s [2024-11-26T20:46:45.115Z] 49903.00 IOPS, 194.93 MiB/s
00:18:50.121 Latency(us)
00:18:50.121 [2024-11-26T20:46:45.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:50.121 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:18:50.121 xnvme_bdev : 5.00 49531.66 193.48 0.00 0.00 1288.17 834.80 4587.52
00:18:50.121 [2024-11-26T20:46:45.115Z] ===================================================================================================================
00:18:50.121 [2024-11-26T20:46:45.115Z] Total : 49531.66 193.48 0.00 0.00 1288.17 834.80 4587.52
00:18:51.057 20:46:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:51.057 20:46:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:18:51.057 20:46:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:18:51.057 20:46:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:51.057 20:46:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:51.057 {
00:18:51.057 "subsystems": [
00:18:51.057 {
00:18:51.057 "subsystem": "bdev",
00:18:51.057 "config": [
00:18:51.057 {
00:18:51.057 "params": {
00:18:51.057 "io_mechanism": "io_uring",
00:18:51.057 "conserve_cpu": true,
00:18:51.057 "filename": "/dev/nvme0n1",
00:18:51.057 "name": "xnvme_bdev"
00:18:51.057 },
00:18:51.057 "method": "bdev_xnvme_create"
00:18:51.057 },
00:18:51.057 {
00:18:51.057 "method": "bdev_wait_for_examine"
00:18:51.057 }
00:18:51.057 ]
00:18:51.057 }
00:18:51.057 ]
00:18:51.057 }
00:18:51.057 [2024-11-26 20:46:45.983193] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:18:51.057 [2024-11-26 20:46:45.983367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72669 ]
00:18:51.314 [2024-11-26 20:46:46.176946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:51.314 [2024-11-26 20:46:46.295591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:51.880 Running I/O for 5 seconds...
00:18:53.745 39013.00 IOPS, 152.39 MiB/s [2024-11-26T20:46:49.675Z] 34703.00 IOPS, 135.56 MiB/s [2024-11-26T20:46:51.047Z] 30317.00 IOPS, 118.43 MiB/s [2024-11-26T20:46:51.983Z] 30414.50 IOPS, 118.81 MiB/s [2024-11-26T20:46:51.983Z] 33359.00 IOPS, 130.31 MiB/s
00:18:56.989 Latency(us)
00:18:56.989 [2024-11-26T20:46:51.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:56.989 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:18:56.989 xnvme_bdev : 5.00 33355.24 130.29 0.00 0.00 1913.26 55.83 28586.18
00:18:56.989 [2024-11-26T20:46:51.983Z] ===================================================================================================================
00:18:56.989 [2024-11-26T20:46:51.983Z] Total : 33355.24 130.29 0.00 0.00 1913.26 55.83 28586.18
00:18:57.946
00:18:57.946 real 0m13.896s
00:18:57.946 user 0m7.420s
00:18:57.946 sys 0m5.178s
00:18:57.946 20:46:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:57.946 20:46:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:57.946 ************************************
00:18:57.946 END TEST xnvme_bdevperf
00:18:57.946 ************************************
00:18:57.946 20:46:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:18:57.946 20:46:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:57.946 20:46:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:57.946 20:46:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:57.946 ************************************
00:18:57.946 START TEST xnvme_fio_plugin
00:18:57.946 ************************************
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:57.946 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:58.205 {
00:18:58.205 "subsystems": [
00:18:58.205 {
00:18:58.205 "subsystem": "bdev",
00:18:58.205 "config": [
00:18:58.205 {
00:18:58.205 "params": {
00:18:58.205 "io_mechanism": "io_uring",
00:18:58.205 "conserve_cpu": true,
00:18:58.205 "filename": "/dev/nvme0n1",
00:18:58.205 "name": "xnvme_bdev"
00:18:58.205 },
00:18:58.205 "method": "bdev_xnvme_create"
00:18:58.205 },
00:18:58.205 {
00:18:58.205 "method": "bdev_wait_for_examine"
00:18:58.205 }
00:18:58.205 ]
00:18:58.205 }
00:18:58.205 ]
00:18:58.205 }
00:18:58.205 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:58.205 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:58.205 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:18:58.205 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:58.205 20:46:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:58.205 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:18:58.205 fio-3.35
00:18:58.205 Starting 1 thread
00:19:04.764
00:19:04.764 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72795: Tue Nov 26 20:46:58 2024
00:19:04.764 read: IOPS=48.0k, BW=187MiB/s (197MB/s)(938MiB/5001msec)
00:19:04.764 slat (usec): min=2, max=670, avg= 3.70, stdev= 1.71
00:19:04.764 clat (usec): min=824, max=2596, avg=1186.74, stdev=128.68
00:19:04.764 lat (usec): min=827, max=2622, avg=1190.44, stdev=128.92
00:19:04.764 clat percentiles (usec):
00:19:04.764 | 1.00th=[ 938], 5.00th=[ 1004], 10.00th=[ 1037], 20.00th=[ 1090],
00:19:04.764 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1205],
00:19:04.764 | 70.00th=[ 1237], 80.00th=[ 1287], 90.00th=[ 1336], 95.00th=[ 1385],
00:19:04.764 | 99.00th=[ 1582], 99.50th=[ 1762], 99.90th=[ 2024], 99.95th=[ 2089],
00:19:04.764 | 99.99th=[ 2474]
00:19:04.764 bw ( KiB/s): min=184320, max=204800, per=99.84%, avg=191658.67, stdev=6845.31, samples=9
00:19:04.764 iops : min=46080, max=51200, avg=47914.67, stdev=1711.33, samples=9
00:19:04.764 lat (usec) : 1000=4.72%
00:19:04.764 lat (msec) : 2=95.17%, 4=0.11%
00:19:04.764 cpu : usr=35.56%, sys=61.00%, ctx=12, majf=0, minf=762
00:19:04.764 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:19:04.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:04.764 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:19:04.764 issued rwts: total=240000,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:04.764 latency : target=0, window=0, percentile=100.00%, depth=64
00:19:04.764
00:19:04.764 Run status group 0 (all jobs):
00:19:04.764 READ: bw=187MiB/s (197MB/s), 187MiB/s-187MiB/s (197MB/s-197MB/s), io=938MiB (983MB), run=5001-5001msec
00:19:05.699 -----------------------------------------------------
00:19:05.699 Suppressions used:
00:19:05.699 count bytes template
00:19:05.699 1 11 /usr/src/fio/parse.c
00:19:05.699 1 8 libtcmalloc_minimal.so
00:19:05.699 1 904 libcrypto.so
00:19:05.699 -----------------------------------------------------
00:19:05.699
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:05.699 20:47:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:05.699 {
00:19:05.699 "subsystems": [
00:19:05.699 {
00:19:05.699 "subsystem": "bdev",
00:19:05.699 "config": [
00:19:05.699 {
00:19:05.699 "params": {
00:19:05.699 "io_mechanism": "io_uring",
00:19:05.699 "conserve_cpu": true,
00:19:05.699 "filename": "/dev/nvme0n1",
00:19:05.699 "name": "xnvme_bdev"
00:19:05.699 },
00:19:05.699 "method": "bdev_xnvme_create"
00:19:05.699 },
00:19:05.699 {
00:19:05.699 "method": "bdev_wait_for_examine"
00:19:05.699 }
00:19:05.699 ]
00:19:05.699 }
00:19:05.699 ]
00:19:05.699 }
00:19:05.957 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:19:05.957 fio-3.35
00:19:05.957 Starting 1 thread
00:19:12.518
00:19:12.518 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72891: Tue Nov 26 20:47:06 2024
00:19:12.518 write: IOPS=46.0k, BW=180MiB/s (189MB/s)(899MiB/5002msec); 0 zone resets
00:19:12.518 slat (usec): min=2, max=100, avg= 4.47, stdev= 1.51
00:19:12.518 clat (usec): min=568, max=2452, avg=1214.66, stdev=166.75
00:19:12.518 lat (usec): min=572, max=2478, avg=1219.12, stdev=167.28
00:19:12.518 clat percentiles (usec):
00:19:12.518 | 1.00th=[ 930], 5.00th=[ 1004], 10.00th=[ 1037], 20.00th=[ 1090],
00:19:12.518 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1237],
00:19:12.518 | 70.00th=[ 1270], 80.00th=[ 1319], 90.00th=[ 1401], 95.00th=[ 1516],
00:19:12.518 | 99.00th=[ 1811], 99.50th=[ 1926], 99.90th=[ 2114], 99.95th=[ 2212],
00:19:12.518 | 99.99th=[ 2376]
00:19:12.518 bw ( KiB/s): min=176128, max=194048, per=100.00%, avg=184225.80, stdev=6712.58, samples=10
00:19:12.518 iops : min=44032, max=48512, avg=46056.30, stdev=1678.16, samples=10
00:19:12.518 lat (usec) : 750=0.01%, 1000=4.89%
00:19:12.518 lat (msec) : 2=94.85%, 4=0.26%
00:19:12.518 cpu : usr=36.57%, sys=59.75%, ctx=11, majf=0, minf=762
00:19:12.518 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:19:12.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:12.518 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:19:12.518 issued rwts: total=0,230208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:12.518 latency : target=0, window=0, percentile=100.00%, depth=64
00:19:12.518
00:19:12.518 Run status group 0 (all jobs):
00:19:12.518 WRITE: bw=180MiB/s (189MB/s), 180MiB/s-180MiB/s (189MB/s-189MB/s), io=899MiB (943MB), run=5002-5002msec
00:19:13.093 -----------------------------------------------------
00:19:13.093 Suppressions used:
00:19:13.093 count bytes template
00:19:13.093 1 11 /usr/src/fio/parse.c
00:19:13.093 1 8 libtcmalloc_minimal.so
00:19:13.093 1 904 libcrypto.so
00:19:13.093 -----------------------------------------------------
00:19:13.093
00:19:13.093 real 0m15.116s
00:19:13.093 user 0m7.635s
00:19:13.093 sys 0m6.842s
00:19:13.093 ************************************
00:19:13.093 END TEST xnvme_fio_plugin
00:19:13.093 ************************************
00:19:13.093 20:47:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:13.093 20:47:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:19:13.387 20:47:08 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:19:13.387 20:47:08 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd
00:19:13.387 20:47:08 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1
00:19:13.387 20:47:08 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1
00:19:13.387 20:47:08 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:19:13.387 20:47:08 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:19:13.387 20:47:08 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:19:13.387 20:47:08 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:19:13.387 20:47:08 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:19:13.387 20:47:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:13.387 20:47:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:13.387 20:47:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:13.387 ************************************
00:19:13.387 START TEST xnvme_rpc
00:19:13.387 ************************************
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72979
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72979
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72979 ']'
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:13.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:13.387 20:47:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:13.387 [2024-11-26 20:47:08.201465] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:19:13.387 [2024-11-26 20:47:08.201595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72979 ]
00:19:13.387 [2024-11-26 20:47:08.374657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:13.646 [2024-11-26 20:47:08.495008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ''
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:14.583 xnvme_bdev
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.583 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]]
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]]
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72979
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72979 ']'
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72979
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:14.584 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72979
00:19:14.841 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 72979
00:19:14.841 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:14.841 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72979'
00:19:14.841 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72979
00:19:14.841 20:47:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72979
00:19:17.373
00:19:17.373 real 0m3.985s
00:19:17.373 user 0m4.045s
00:19:17.373 sys 0m0.538s
00:19:17.373 20:47:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:17.373 20:47:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:19:17.373 ************************************
00:19:17.373 END TEST xnvme_rpc
00:19:17.373 ************************************
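
The rpc_xnvme helper used for the assertions above dumps the bdev subsystem configuration and picks one parameter of the bdev_xnvme_create entry out with jq. The same check can be run by hand (same jq filter as the trace; repo-root-relative rpc.py assumed):

    # Prints "io_uring_cmd" for the bdev created in this round.
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
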
00:19:17.373 20:47:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:19:17.373 20:47:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:17.373 20:47:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:17.373 20:47:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:17.373 ************************************
00:19:17.373 START TEST xnvme_bdevperf
00:19:17.373 ************************************
00:19:17.373 20:47:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:19:17.373 20:47:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:19:17.373 20:47:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd
00:19:17.373 20:47:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:19:17.373 20:47:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:19:17.373 20:47:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:19:17.373 20:47:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:19:17.373 20:47:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:17.373 {
00:19:17.373 "subsystems": [
00:19:17.373 {
00:19:17.373 "subsystem": "bdev",
00:19:17.373 "config": [
00:19:17.373 {
00:19:17.373 "params": {
00:19:17.373 "io_mechanism": "io_uring_cmd",
00:19:17.373 "conserve_cpu": false,
00:19:17.373 "filename": "/dev/ng0n1",
00:19:17.373 "name": "xnvme_bdev"
00:19:17.373 },
00:19:17.373 "method": "bdev_xnvme_create"
00:19:17.373 },
00:19:17.373 {
00:19:17.373 "method": "bdev_wait_for_examine"
00:19:17.373 }
00:19:17.373 ]
00:19:17.373 }
00:19:17.373 ]
00:19:17.373 }
00:19:17.373 [2024-11-26 20:47:12.261215] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:19:17.373 [2024-11-26 20:47:12.261641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73064 ]
00:19:17.631 [2024-11-26 20:47:12.454211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:17.631 [2024-11-26 20:47:12.572775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:18.199 Running I/O for 5 seconds...
00:19:20.073 50304.00 IOPS, 196.50 MiB/s [2024-11-26T20:47:16.000Z] 49184.00 IOPS, 192.12 MiB/s [2024-11-26T20:47:16.933Z] 50090.00 IOPS, 195.66 MiB/s [2024-11-26T20:47:18.306Z] 50559.50 IOPS, 197.50 MiB/s
00:19:23.312 Latency(us)
00:19:23.312 [2024-11-26T20:47:18.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:23.312 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:19:23.312 xnvme_bdev : 5.00 50389.79 196.84 0.00 0.00 1266.11 830.90 3635.69
00:19:23.312 [2024-11-26T20:47:18.306Z] ===================================================================================================================
00:19:23.312 [2024-11-26T20:47:18.306Z] Total : 50389.79 196.84 0.00 0.00 1266.11 830.90 3635.69
00:19:24.245 20:47:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:19:24.245 20:47:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:19:24.245 20:47:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:19:24.245 20:47:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:19:24.245 20:47:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:24.245 {
00:19:24.245 "subsystems": [
00:19:24.245 {
00:19:24.245 "subsystem": "bdev",
00:19:24.245 "config": [
00:19:24.245 {
00:19:24.245 "params": {
00:19:24.245 "io_mechanism": "io_uring_cmd",
00:19:24.245 "conserve_cpu": false,
00:19:24.245 "filename": "/dev/ng0n1",
00:19:24.245 "name": "xnvme_bdev"
00:19:24.245 },
00:19:24.245 "method": "bdev_xnvme_create"
00:19:24.245 },
00:19:24.245 {
00:19:24.245 "method": "bdev_wait_for_examine"
00:19:24.245 }
00:19:24.245 ]
00:19:24.245 }
00:19:24.245 ]
00:19:24.245 }
00:19:24.245 [2024-11-26 20:47:19.178447] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:19:24.245 [2024-11-26 20:47:19.178573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73138 ]
00:19:24.502 [2024-11-26 20:47:19.363316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:24.760 [2024-11-26 20:47:19.536854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:25.017 Running I/O for 5 seconds...
00:19:26.987 47680.00 IOPS, 186.25 MiB/s [2024-11-26T20:47:23.356Z] 45760.00 IOPS, 178.75 MiB/s [2024-11-26T20:47:24.291Z] 45930.67 IOPS, 179.42 MiB/s [2024-11-26T20:47:25.226Z] 46592.00 IOPS, 182.00 MiB/s
00:19:30.232 Latency(us)
00:19:30.232 [2024-11-26T20:47:25.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:30.232 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:19:30.232 xnvme_bdev : 5.00 46816.58 182.88 0.00 0.00 1362.28 877.71 5586.16
00:19:30.232 [2024-11-26T20:47:25.226Z] ===================================================================================================================
00:19:30.232 [2024-11-26T20:47:25.226Z] Total : 46816.58 182.88 0.00 0.00 1362.28 877.71 5586.16
00:19:31.608 20:47:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:19:31.608 20:47:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:19:31.608 20:47:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096
00:19:31.608 20:47:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:19:31.608 20:47:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:31.608 {
00:19:31.608 "subsystems": [
00:19:31.608 {
00:19:31.608 "subsystem": "bdev",
00:19:31.608 "config": [
00:19:31.608 {
00:19:31.608 "params": {
00:19:31.608 "io_mechanism": "io_uring_cmd",
00:19:31.608 "conserve_cpu": false,
00:19:31.608 "filename": "/dev/ng0n1",
00:19:31.608 "name": "xnvme_bdev"
00:19:31.608 },
00:19:31.608 "method": "bdev_xnvme_create"
00:19:31.608 },
00:19:31.608 {
00:19:31.608 "method": "bdev_wait_for_examine"
00:19:31.608 }
00:19:31.608 ]
00:19:31.608 }
00:19:31.608 ]
00:19:31.608 }
00:19:31.608 [2024-11-26 20:47:26.320708] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:19:31.608 [2024-11-26 20:47:26.320882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73218 ]
00:19:31.866 [2024-11-26 20:47:26.508935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:31.866 [2024-11-26 20:47:26.625772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:32.124 Running I/O for 5 seconds...
00:19:34.010 97024.00 IOPS, 379.00 MiB/s [2024-11-26T20:47:30.380Z] 96960.00 IOPS, 378.75 MiB/s [2024-11-26T20:47:31.316Z] 96938.67 IOPS, 378.67 MiB/s [2024-11-26T20:47:32.249Z] 95104.00 IOPS, 371.50 MiB/s
00:19:37.255 Latency(us)
00:19:37.255 [2024-11-26T20:47:32.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:37.255 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096)
00:19:37.255 xnvme_bdev : 5.00 95565.27 373.30 0.00 0.00 666.79 436.91 2449.80
00:19:37.255 [2024-11-26T20:47:32.249Z] ===================================================================================================================
00:19:37.255 [2024-11-26T20:47:32.249Z] Total : 95565.27 373.30 0.00 0.00 666.79 436.91 2449.80
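
The MiB/s column is derived from IOPS at the fixed 4 KiB I/O size (-o 4096), so the unmap result above cross-checks directly: 95565.27 IOPS * 4096 B is about 391.4 MB/s, which is the reported 373.30 MiB/s once divided by 1048576. As a one-liner:

    # IOPS -> MiB/s at 4 KiB per I/O
    awk 'BEGIN { printf "%.2f MiB/s\n", 95565.27 * 4096 / 1048576 }'
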
00:19:41.279 26986.00 IOPS, 105.41 MiB/s [2024-11-26T20:47:37.207Z] 18176.50 IOPS, 71.00 MiB/s [2024-11-26T20:47:38.138Z] 18816.33 IOPS, 73.50 MiB/s [2024-11-26T20:47:39.071Z] 26917.25 IOPS, 105.15 MiB/s [2024-11-26T20:47:39.071Z] 31652.20 IOPS, 123.64 MiB/s
00:19:44.077 Latency(us)
00:19:44.077 [2024-11-26T20:47:39.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:44.077 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096)
00:19:44.077 xnvme_bdev : 5.00 31639.01 123.59 0.00 0.00 2018.63 79.97 27712.37
00:19:44.077 [2024-11-26T20:47:39.071Z] ===================================================================================================================
00:19:44.077 [2024-11-26T20:47:39.071Z] Total : 31639.01 123.59 0.00 0.00 2018.63 79.97 27712.37
00:19:45.456
00:19:45.456 real 0m28.148s
00:19:45.456 user 0m14.641s
00:19:45.456 sys 0m13.117s
00:19:45.456 20:47:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:45.456 20:47:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:45.456 ************************************
00:19:45.456 END TEST xnvme_bdevperf
00:19:45.456 ************************************
00:19:45.456 20:47:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:19:45.456 20:47:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:45.456 20:47:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:45.456 20:47:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:45.456 ************************************
00:19:45.456 START TEST xnvme_fio_plugin
00:19:45.456 ************************************
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:45.456 20:47:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:45.456 {
00:19:45.456 "subsystems": [
00:19:45.456 {
00:19:45.456 "subsystem": "bdev",
00:19:45.456 "config": [
00:19:45.456 {
00:19:45.456 "params": {
00:19:45.456 "io_mechanism": "io_uring_cmd",
00:19:45.456 "conserve_cpu": false,
00:19:45.456 "filename": "/dev/ng0n1",
00:19:45.456 "name": "xnvme_bdev"
00:19:45.456 },
00:19:45.456 "method": "bdev_xnvme_create"
00:19:45.456 },
00:19:45.456 {
00:19:45.456 "method": "bdev_wait_for_examine"
00:19:45.456 }
00:19:45.456 ]
00:19:45.457 }
00:19:45.457 ]
00:19:45.457 }
00:19:45.715 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:19:45.715 fio-3.35
00:19:45.715 Starting 1 thread
00:19:52.314
00:19:52.314 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73422: Tue Nov 26 20:47:46 2024
00:19:52.314 read: IOPS=51.6k, BW=202MiB/s (211MB/s)(1008MiB/5001msec)
00:19:52.314 slat (usec): min=2, max=262, avg= 3.78, stdev= 1.70
00:19:52.314 clat (usec): min=550, max=1966, avg=1091.44, stdev=134.27
00:19:52.314 lat (usec): min=554, max=1994, avg=1095.22, stdev=134.68
00:19:52.314 clat percentiles (usec):
00:19:52.314 | 1.00th=[ 840], 5.00th=[ 889], 10.00th=[ 930], 20.00th=[ 979],
00:19:52.314 | 30.00th=[ 1012], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1123],
00:19:52.314 | 70.00th=[ 1156], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[ 1303],
00:19:52.314 | 99.00th=[ 1483], 99.50th=[ 1582], 99.90th=[ 1778], 99.95th=[ 1827],
00:19:52.314 | 99.99th=[ 1893]
00:19:52.314 bw ( KiB/s): min=183808, max=222720, per=100.00%, avg=207616.00, stdev=13087.33, samples=9
00:19:52.314 iops : min=45952, max=55680, avg=51904.00, stdev=3271.83, samples=9
00:19:52.314 lat (usec) : 750=0.02%, 1000=25.57%
00:19:52.314 lat (msec) : 2=74.40%
00:19:52.314 cpu : usr=33.38%, sys=65.52%, ctx=55, majf=0, minf=762
00:19:52.314 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:19:52.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:52.314 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0%
00:19:52.314 issued rwts: total=258016,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:52.314 latency : target=0, window=0, percentile=100.00%, depth=64
00:19:52.314
00:19:52.314 Run status group 0 (all jobs):
00:19:52.314 READ: bw=202MiB/s (211MB/s), 202MiB/s-202MiB/s (211MB/s-211MB/s), io=1008MiB (1057MB), run=5001-5001msec
00:19:52.881 -----------------------------------------------------
00:19:52.881 Suppressions used:
00:19:52.881 count bytes template
00:19:52.881 1 11 /usr/src/fio/parse.c
00:19:52.881 1 8 libtcmalloc_minimal.so
00:19:52.881 1 904 libcrypto.so
00:19:52.881 -----------------------------------------------------
00:19:52.881
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:53.139 20:47:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:19:53.139 {
00:19:53.139 "subsystems": [
00:19:53.139 {
00:19:53.139 "subsystem": "bdev",
00:19:53.139 "config": [
00:19:53.139 {
00:19:53.139 "params": {
00:19:53.139 "io_mechanism": "io_uring_cmd",
00:19:53.139 "conserve_cpu": false,
00:19:53.139 "filename": "/dev/ng0n1",
00:19:53.139 "name": "xnvme_bdev"
00:19:53.140 },
00:19:53.140 "method": "bdev_xnvme_create"
00:19:53.140 },
00:19:53.140 {
00:19:53.140 "method": "bdev_wait_for_examine"
00:19:53.140 }
00:19:53.140 ]
00:19:53.140 }
00:19:53.140 ]
00:19:53.140 }
00:19:53.140 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:19:53.140 fio-3.35
00:19:53.140 Starting 1 thread
00:19:59.700
00:19:59.700 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73513: Tue Nov 26 20:47:53 2024
00:19:59.700 write: IOPS=46.4k, BW=181MiB/s (190MB/s)(906MiB/5002msec); 0 zone resets
00:19:59.700 slat (nsec): min=2887, max=94197, avg=4617.29, stdev=1760.79
00:19:59.700 clat (usec): min=817, max=3054, avg=1199.43, stdev=186.71
00:19:59.700 lat (usec): min=820, max=3087, avg=1204.05, stdev=187.54
00:19:59.700 clat percentiles (usec):
00:19:59.700 | 1.00th=[ 914], 5.00th=[ 963], 10.00th=[ 1004], 20.00th=[ 1057],
00:19:59.700 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205],
00:19:59.700 | 70.00th=[ 1254], 80.00th=[ 1303], 90.00th=[ 1401], 95.00th=[ 1565],
00:19:59.700 | 99.00th=[ 1860], 99.50th=[ 1926], 99.90th=[ 2212], 99.95th=[ 2474],
00:19:59.700 | 99.99th=[ 2868]
00:19:59.700 bw ( KiB/s): min=173221, max=196096, per=100.00%, avg=185703.67, stdev=7375.72, samples=9
00:19:59.700 iops : min=43305, max=49024, avg=46425.89, stdev=1843.98, samples=9
00:19:59.700 lat (usec) : 1000=9.80%
00:19:59.700 lat (msec) : 2=89.91%, 4=0.29%
00:19:59.700 cpu : usr=36.19%, sys=62.89%, ctx=11, majf=0, minf=762
00:19:59.700 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:19:59.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:59.700 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:19:59.700 issued rwts: total=0,231872,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:59.700 latency : target=0, window=0, percentile=100.00%, depth=64
00:19:59.700
00:19:59.700 Run status group 0 (all jobs):
00:19:59.700 WRITE: bw=181MiB/s (190MB/s), 181MiB/s-181MiB/s (190MB/s-190MB/s), io=906MiB (950MB), run=5002-5002msec
00:20:00.636 -----------------------------------------------------
00:20:00.636 Suppressions used:
00:20:00.636 count bytes template
00:20:00.636 1 11 /usr/src/fio/parse.c
00:20:00.636 1 8 libtcmalloc_minimal.so
00:20:00.636 1 904 libcrypto.so
00:20:00.636 -----------------------------------------------------
00:20:00.636
00:20:00.636
00:20:00.636 real 0m15.043s
00:20:00.636 user 0m7.506s
00:20:00.636 sys 0m7.181s
00:20:00.636 20:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:00.636 ************************************
00:20:00.636 20:47:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:20:00.636 END TEST xnvme_fio_plugin
00:20:00.636 ************************************
00:20:00.636 20:47:55 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:20:00.636 20:47:55 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:20:00.636 20:47:55 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:20:00.636 20:47:55 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:20:00.636 20:47:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:00.636 20:47:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:00.636 20:47:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:20:00.636 ************************************
00:20:00.636 START TEST xnvme_rpc
00:20:00.636 ************************************
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73604
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73604
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73604 ']'
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:00.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:00.636 20:47:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:20:00.636 [2024-11-26 20:47:55.596286] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:20:00.636 [2024-11-26 20:47:55.596478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73604 ]
00:20:00.895 [2024-11-26 20:47:55.791867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:01.154 [2024-11-26 20:47:55.908845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:20:02.090 xnvme_bdev
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]]
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]]
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73604
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73604 ']'
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73604
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:20:02.090 20:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:02.090 20:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73604
00:20:02.090 killing process with pid 73604
00:20:02.090 20:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:02.090 20:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:02.090 20:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73604'
00:20:02.090 20:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73604
00:20:02.090 20:47:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73604
00:20:04.630 ************************************
00:20:04.630 END TEST xnvme_rpc
00:20:04.630 ************************************
00:20:04.630
00:20:04.630 real 0m4.080s
00:20:04.630 user 0m4.200s
00:20:04.630 sys 0m0.521s
00:20:04.630 20:47:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:04.630 20:47:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:20:04.630 20:47:59 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:20:04.630 20:47:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:04.630 20:47:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:04.630 20:47:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:20:04.630 ************************************
00:20:04.630 START TEST xnvme_bdevperf
00:20:04.630 ************************************
00:20:04.630 20:47:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:20:04.630 20:47:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:20:04.630 20:47:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd
00:20:04.630 20:47:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:20:04.630 20:47:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:20:04.630 20:47:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:20:04.630 20:47:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:20:04.630 20:47:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:04.889 {
00:20:04.889 "subsystems": [
00:20:04.889 {
00:20:04.889 "subsystem": "bdev",
00:20:04.889 "config": [
00:20:04.889 {
00:20:04.889 "params": {
00:20:04.889 "io_mechanism": "io_uring_cmd",
00:20:04.889 "conserve_cpu": true,
00:20:04.889 "filename": "/dev/ng0n1",
00:20:04.889 "name": "xnvme_bdev"
00:20:04.889 },
00:20:04.889 "method": "bdev_xnvme_create"
00:20:04.889 },
00:20:04.889 {
00:20:04.889 "method": "bdev_wait_for_examine"
00:20:04.889 }
00:20:04.889 ]
00:20:04.889 }
00:20:04.889 ]
00:20:04.889 }
00:20:04.889 [2024-11-26 20:47:59.689732] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:20:04.889 [2024-11-26 20:47:59.689864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73684 ]
00:20:04.889 [2024-11-26 20:47:59.860068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:05.148 [2024-11-26 20:47:59.978175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:05.407 Running I/O for 5 seconds...
00:20:07.713 50304.00 IOPS, 196.50 MiB/s [2024-11-26T20:48:03.641Z] 54464.00 IOPS, 212.75 MiB/s [2024-11-26T20:48:04.577Z] 53290.67 IOPS, 208.17 MiB/s [2024-11-26T20:48:05.513Z] 53248.00 IOPS, 208.00 MiB/s
00:20:10.519 Latency(us)
00:20:10.519 [2024-11-26T20:48:05.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:10.519 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:20:10.519 xnvme_bdev : 5.00 53512.88 209.03 0.00 0.00 1192.12 791.89 4431.48
00:20:10.519 [2024-11-26T20:48:05.513Z] ===================================================================================================================
00:20:10.519 [2024-11-26T20:48:05.513Z] Total : 53512.88 209.03 0.00 0.00 1192.12 791.89 4431.48
00:20:11.895 20:48:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:20:11.895 20:48:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:20:11.895 20:48:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:20:11.895 20:48:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:20:11.895 20:48:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:11.895 {
00:20:11.895 "subsystems": [
00:20:11.895 {
00:20:11.895 "subsystem": "bdev",
00:20:11.895 "config": [
00:20:11.895 {
00:20:11.895 "params": {
00:20:11.895 "io_mechanism": "io_uring_cmd",
00:20:11.895 "conserve_cpu": true,
00:20:11.895 "filename": "/dev/ng0n1",
00:20:11.895 "name": "xnvme_bdev"
00:20:11.895 },
00:20:11.895 "method": "bdev_xnvme_create"
00:20:11.895 },
00:20:11.895 {
00:20:11.895 "method": "bdev_wait_for_examine"
00:20:11.895 }
00:20:11.895 ]
00:20:11.895 }
00:20:11.895 ]
00:20:11.895 }
00:20:11.895 [2024-11-26 20:48:06.737205] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:20:11.895 [2024-11-26 20:48:06.737375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73765 ]
00:20:12.154 [2024-11-26 20:48:06.924450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:12.154 [2024-11-26 20:48:07.039564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:12.411 Running I/O for 5 seconds...
00:20:14.716 42978.00 IOPS, 167.88 MiB/s [2024-11-26T20:48:10.644Z] 45001.50 IOPS, 175.79 MiB/s [2024-11-26T20:48:11.616Z] 45979.67 IOPS, 179.61 MiB/s [2024-11-26T20:48:12.554Z] 46052.75 IOPS, 179.89 MiB/s
00:20:17.560 Latency(us)
00:20:17.560 [2024-11-26T20:48:12.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:17.560 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:20:17.560 xnvme_bdev : 5.00 45857.54 179.13 0.00 0.00 1390.50 60.46 8301.23
00:20:17.560 [2024-11-26T20:48:12.554Z] ===================================================================================================================
00:20:17.560 [2024-11-26T20:48:12.554Z] Total : 45857.54 179.13 0.00 0.00 1390.50 60.46 8301.23
00:20:18.936 20:48:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:20:18.936 20:48:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096
00:20:18.936 20:48:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:20:18.936 20:48:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:20:18.936 20:48:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:18.936 {
00:20:18.936 "subsystems": [
00:20:18.936 {
00:20:18.936 "subsystem": "bdev",
00:20:18.936 "config": [
00:20:18.936 {
00:20:18.936 "params": {
00:20:18.936 "io_mechanism": "io_uring_cmd",
00:20:18.936 "conserve_cpu": true,
00:20:18.936 "filename": "/dev/ng0n1",
00:20:18.936 "name": "xnvme_bdev"
00:20:18.936 },
00:20:18.936 "method": "bdev_xnvme_create"
00:20:18.936 },
00:20:18.936 {
00:20:18.936 "method": "bdev_wait_for_examine"
00:20:18.936 }
00:20:18.936 ]
00:20:18.936 }
00:20:18.936 ]
00:20:18.936 }
00:20:18.936 [2024-11-26 20:48:13.723812] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:20:18.936 [2024-11-26 20:48:13.724107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73839 ]
00:20:18.936 [2024-11-26 20:48:13.910036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:19.194 [2024-11-26 20:48:14.074276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:19.453 Running I/O for 5 seconds...
00:20:21.764 97536.00 IOPS, 381.00 MiB/s [2024-11-26T20:48:17.693Z] 97120.00 IOPS, 379.38 MiB/s [2024-11-26T20:48:18.627Z] 96021.33 IOPS, 375.08 MiB/s [2024-11-26T20:48:19.562Z] 95600.00 IOPS, 373.44 MiB/s [2024-11-26T20:48:19.562Z] 94822.40 IOPS, 370.40 MiB/s
00:20:24.568 Latency(us)
00:20:24.568 [2024-11-26T20:48:19.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:24.568 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096)
00:20:24.568 xnvme_bdev : 5.00 94800.67 370.32 0.00 0.00 672.30 444.71 2371.78
00:20:24.568 [2024-11-26T20:48:19.562Z] ===================================================================================================================
00:20:24.568 [2024-11-26T20:48:19.562Z] Total : 94800.67 370.32 0.00 0.00 672.30 444.71 2371.78
00:20:25.943 20:48:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:20:25.943 20:48:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096
00:20:25.943 20:48:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:20:25.943 20:48:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:20:25.943 20:48:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:25.943 {
00:20:25.943 "subsystems": [
00:20:25.943 {
00:20:25.943 "subsystem": "bdev",
00:20:25.943 "config": [
00:20:25.943 {
00:20:25.943 "params": {
00:20:25.943 "io_mechanism": "io_uring_cmd",
00:20:25.943 "conserve_cpu": true,
00:20:25.943 "filename": "/dev/ng0n1",
00:20:25.943 "name": "xnvme_bdev"
00:20:25.943 },
00:20:25.943 "method": "bdev_xnvme_create"
00:20:25.943 },
00:20:25.943 {
00:20:25.943 "method": "bdev_wait_for_examine"
00:20:25.943 }
00:20:25.943 ]
00:20:25.943 }
00:20:25.943 ]
00:20:25.943 }
00:20:25.943 [2024-11-26 20:48:20.727826] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:20:25.943 [2024-11-26 20:48:20.728840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73919 ]
00:20:25.943 [2024-11-26 20:48:20.923429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:26.202 [2024-11-26 20:48:21.037770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:26.459 Running I/O for 5 seconds...
00:20:28.766 44264.00 IOPS, 172.91 MiB/s [2024-11-26T20:48:24.692Z] 43225.50 IOPS, 168.85 MiB/s [2024-11-26T20:48:25.625Z] 43089.67 IOPS, 168.32 MiB/s [2024-11-26T20:48:26.560Z] 42875.75 IOPS, 167.48 MiB/s [2024-11-26T20:48:26.561Z] 42180.80 IOPS, 164.77 MiB/s
00:20:31.567 Latency(us)
00:20:31.567 [2024-11-26T20:48:26.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:31.567 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096)
00:20:31.567 xnvme_bdev : 5.00 42154.09 164.66 0.00 0.00 1511.00 108.74 13668.94
00:20:31.567 [2024-11-26T20:48:26.561Z] ===================================================================================================================
00:20:31.567 [2024-11-26T20:48:26.561Z] Total : 42154.09 164.66 0.00 0.00 1511.00 108.74 13668.94
00:20:32.945
00:20:32.945 real 0m28.178s
00:20:32.945 user 0m15.837s
00:20:32.945 sys 0m10.050s
00:20:32.945 20:48:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:32.945 20:48:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:32.945 ************************************
00:20:32.945 END TEST xnvme_bdevperf
00:20:32.945 ************************************
00:20:32.945 20:48:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:20:32.945 20:48:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:32.945 20:48:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:32.945 20:48:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:20:32.945 ************************************
00:20:32.945 START TEST xnvme_fio_plugin
00:20:32.945 ************************************
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:20:32.945 20:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:20:32.945 {
00:20:32.945 "subsystems": [
00:20:32.945 {
00:20:32.945 "subsystem": "bdev",
00:20:32.945 "config": [
00:20:32.945 {
00:20:32.945 "params": {
00:20:32.945 "io_mechanism": "io_uring_cmd",
00:20:32.945 "conserve_cpu": true,
00:20:32.945 "filename": "/dev/ng0n1",
00:20:32.945 "name": "xnvme_bdev"
00:20:32.945 },
00:20:32.945 "method": "bdev_xnvme_create"
00:20:32.945 },
00:20:32.945 {
00:20:32.945 "method": "bdev_wait_for_examine"
00:20:32.945 }
00:20:32.945 ]
00:20:32.945 }
00:20:32.945 ]
00:20:32.945 }
00:20:33.266 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:20:33.266 fio-3.35
00:20:33.266 Starting 1 thread
00:20:39.855
00:20:39.855 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74043: Tue Nov 26 20:48:33 2024
00:20:39.855 read: IOPS=46.5k, BW=182MiB/s (190MB/s)(908MiB/5001msec)
00:20:39.855 slat (nsec): min=2551, max=48179, avg=3827.17, stdev=999.21
00:20:39.855 clat (usec): min=788, max=3515, avg=1225.25, stdev=161.34
00:20:39.855 lat (usec): min=791, max=3551, avg=1229.08, stdev=161.56
00:20:39.855 clat percentiles (usec):
00:20:39.855 | 1.00th=[ 930], 5.00th=[ 1012], 10.00th=[ 1057], 20.00th=[ 1106],
00:20:39.855 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1237],
00:20:39.855 | 70.00th=[ 1287], 80.00th=[ 1319], 90.00th=[ 1401], 95.00th=[ 1483],
00:20:39.855 | 99.00th=[ 1827], 99.50th=[ 1926], 99.90th=[ 2114], 99.95th=[ 2376],
00:20:39.855 | 99.99th=[ 3261]
00:20:39.855 bw ( KiB/s): min=176128, max=204288, per=100.00%, avg=186880.00, stdev=9855.17, samples=9
00:20:39.855 iops : min=44032, max=51072, avg=46720.00, stdev=2463.79, samples=9
00:20:39.855 lat (usec) : 1000=4.02%
00:20:39.855 lat (msec) : 2=95.72%, 4=0.26%
00:20:39.855 cpu : usr=39.52%, sys=58.20%, ctx=9, majf=0, minf=762
00:20:39.855 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:20:39.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:39.855 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:20:39.855 issued rwts: total=232448,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:39.855 latency : target=0, window=0, percentile=100.00%, depth=64
00:20:39.855
00:20:39.855 Run status group 0 (all jobs):
00:20:39.855 READ: bw=182MiB/s (190MB/s), 182MiB/s-182MiB/s (190MB/s-190MB/s), io=908MiB (952MB), run=5001-5001msec
00:20:40.423 -----------------------------------------------------
00:20:40.423 Suppressions used:
00:20:40.423 count bytes template
00:20:40.423 1 11 /usr/src/fio/parse.c
00:20:40.423 1 8 libtcmalloc_minimal.so
00:20:40.423 1 904 libcrypto.so
00:20:40.423 -----------------------------------------------------
00:20:40.423
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:40.423 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:20:40.681 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:20:40.681 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:20:40.681 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:20:40.681 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:20:40.681 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:20:40.681 20:48:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:20:40.681 {
00:20:40.681 "subsystems": [
00:20:40.681 {
00:20:40.681 "subsystem": "bdev",
00:20:40.681 "config": [
00:20:40.681 {
00:20:40.681 "params": {
00:20:40.681 "io_mechanism": "io_uring_cmd",
00:20:40.681 "conserve_cpu": true,
00:20:40.681 "filename": "/dev/ng0n1",
00:20:40.681 "name": "xnvme_bdev"
00:20:40.681 },
00:20:40.681 "method": "bdev_xnvme_create"
00:20:40.681 },
00:20:40.681 {
00:20:40.681 "method": "bdev_wait_for_examine"
00:20:40.681 }
00:20:40.681 ]
00:20:40.681 }
00:20:40.681 ]
00:20:40.681 }
00:20:40.939 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:20:40.939 fio-3.35
00:20:40.939 Starting 1 thread
00:20:47.548
00:20:47.548 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74144: Tue Nov 26 20:48:41 2024
00:20:47.548 write: IOPS=42.6k, BW=167MiB/s (175MB/s)(833MiB/5001msec); 0 zone resets
00:20:47.548 slat (usec): min=2, max=351, avg= 4.90, stdev= 3.66
00:20:47.548 clat (usec): min=63, max=14434, avg=1328.93, stdev=819.00
00:20:47.548 lat (usec): min=67, max=14439, avg=1333.83, stdev=819.24
00:20:47.548 clat percentiles (usec):
00:20:47.548 | 1.00th=[ 198], 5.00th=[ 619], 10.00th=[ 979], 20.00th=[ 1057],
00:20:47.548 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1237],
00:20:47.548 | 70.00th=[ 1287], 80.00th=[ 1369], 90.00th=[ 1614], 95.00th=[ 2057],
00:20:47.548 | 99.00th=[ 5407], 99.50th=[ 5997], 99.90th=[10421], 99.95th=[12125],
00:20:47.548 | 99.99th=[13566]
00:20:47.548 bw ( KiB/s): min=123139, max=186880, per=99.36%, avg=169485.67, stdev=20815.18, samples=9
00:20:47.548 iops : min=30784, max=46720, avg=42371.33, stdev=5204.00, samples=9
00:20:47.548 lat (usec) : 100=0.09%, 250=1.56%, 500=2.59%, 750=1.64%, 1000=6.11%
00:20:47.548 lat (msec) : 2=82.75%, 4=2.78%, 10=2.36%, 20=0.10%
00:20:47.548 cpu : usr=39.58%, sys=54.44%, ctx=18, majf=0, minf=762
00:20:47.548 IO depths : 1=1.3%, 2=2.7%, 4=5.4%, 8=10.9%, 16=22.5%, 32=54.1%, >=64=3.1%
00:20:47.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:47.548 complete : 0=0.0%, 4=98.0%, 8=0.2%, 16=0.2%, 32=0.1%, 64=1.4%, >=64=0.0%
00:20:47.548 issued rwts: total=0,213261,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:47.548 latency : target=0, window=0, percentile=100.00%, depth=64
00:20:47.548
00:20:47.548 Run status group 0 (all jobs):
00:20:47.548 WRITE: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=833MiB (874MB), run=5001-5001msec
00:20:48.114 -----------------------------------------------------
00:20:48.114 Suppressions used:
00:20:48.114 count bytes template
00:20:48.114 1 11 /usr/src/fio/parse.c
00:20:48.114 1 8 libtcmalloc_minimal.so
00:20:48.114 1 904 libcrypto.so
00:20:48.114 -----------------------------------------------------
00:20:48.114
00:20:48.114
00:20:48.114 real 0m15.086s
00:20:48.114 user 0m7.967s
00:20:48.114 sys 0m6.435s
00:20:48.114 20:48:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:48.114 ************************************
00:20:48.114 END TEST xnvme_fio_plugin
00:20:48.114 ************************************
00:20:48.114 20:48:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:20:48.114 20:48:42 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73604
00:20:48.114 20:48:42 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73604 ']'
00:20:48.114 20:48:42 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73604
00:20:48.114 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73604) - No such process
00:20:48.114 Process with pid 73604 is not found
00:20:48.114 20:48:42 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73604 is not found'
00:20:48.114 ************************************
00:20:48.114 END TEST nvme_xnvme
00:20:48.114 ************************************
00:20:48.114
00:20:48.114 real 3m56.721s
00:20:48.114 user 2m7.066s
00:20:48.114 sys 1m31.548s
00:20:48.114 20:48:42 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:48.114 20:48:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:20:48.114 20:48:43 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:20:48.114 20:48:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:48.114 20:48:43 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:48.114 20:48:43 -- common/autotest_common.sh@10 -- # set +x
00:20:48.114 ************************************
00:20:48.114 START TEST blockdev_xnvme
00:20:48.114 ************************************
00:20:48.114 20:48:43 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:20:48.374 * Looking for test storage...
00:20:48.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@345 -- # : 1
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:48.374 20:48:43 blockdev_xnvme -- scripts/common.sh@368 -- # return 0
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:20:48.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:48.374 --rc genhtml_branch_coverage=1
00:20:48.374 --rc genhtml_function_coverage=1
00:20:48.374 --rc genhtml_legend=1
00:20:48.374 --rc geninfo_all_blocks=1
00:20:48.374 --rc geninfo_unexecuted_blocks=1
00:20:48.374
00:20:48.374 '
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:20:48.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:48.374 --rc genhtml_branch_coverage=1
00:20:48.374 --rc genhtml_function_coverage=1
00:20:48.374 --rc genhtml_legend=1
00:20:48.374 --rc geninfo_all_blocks=1
00:20:48.374 --rc geninfo_unexecuted_blocks=1
00:20:48.374
00:20:48.374 '
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:20:48.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:48.374 --rc genhtml_branch_coverage=1
00:20:48.374 --rc genhtml_function_coverage=1
00:20:48.374 --rc genhtml_legend=1
00:20:48.374 --rc geninfo_all_blocks=1
00:20:48.374 --rc geninfo_unexecuted_blocks=1
00:20:48.374
00:20:48.374 '
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:20:48.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:48.374 --rc genhtml_branch_coverage=1
00:20:48.374 --rc genhtml_function_coverage=1
00:20:48.374 --rc genhtml_legend=1
00:20:48.374 --rc geninfo_all_blocks=1
00:20:48.374 --rc geninfo_unexecuted_blocks=1
00:20:48.374
00:20:48.374 '
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@20 -- # :
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']'
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device=
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek=
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx=
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]]
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]]
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74280
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74280
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74280 ']'
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:48.374 20:48:43 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:48.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:48.374 20:48:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:20:48.634 [2024-11-26 20:48:43.383014] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:20:48.634 [2024-11-26 20:48:43.383185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74280 ]
00:20:48.634 [2024-11-26 20:48:43.582783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:48.893 [2024-11-26 20:48:43.744607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:49.827 20:48:44 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:49.827 20:48:44 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0
00:20:49.827 20:48:44 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:20:49.827 20:48:44 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf
00:20:49.828 20:48:44 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring
00:20:49.828 20:48:44 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes
00:20:49.828 20:48:44 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:20:50.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:50.997 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:20:50.997 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:20:50.997 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:20:50.997 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:20:50.997 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs
00:20:50.997 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.998 20:48:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:50.998 20:48:45 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:20:51.257 nvme0n1 00:20:51.257 nvme0n2 00:20:51.257 nvme0n3 00:20:51.257 nvme1n1 00:20:51.257 nvme2n1 00:20:51.257 nvme3n1 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.257 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.257 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:20:51.257 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.257 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.257 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.257 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:20:51.257 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:20:51.257 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.257 20:48:46 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:20:51.257 20:48:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.257 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:20:51.257 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:20:51.258 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "293a3d50-c050-4090-96e4-1bec86de6e68"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "293a3d50-c050-4090-96e4-1bec86de6e68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "7542eebf-9687-443f-b155-8d5222011ea4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7542eebf-9687-443f-b155-8d5222011ea4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "95c61436-7829-46be-9989-cc30a402a1a8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "95c61436-7829-46be-9989-cc30a402a1a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "05fbe579-7b7d-410e-a6f0-240885680947"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "05fbe579-7b7d-410e-a6f0-240885680947",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b31fb77f-bf49-4bf3-b0f4-830d69c5e769"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b31fb77f-bf49-4bf3-b0f4-830d69c5e769",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2db01ba4-fb76-42f4-bb81-7032426f5a9a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2db01ba4-fb76-42f4-bb81-7032426f5a9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:51.258 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:20:51.258 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:20:51.258 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:20:51.258 20:48:46 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 74280 00:20:51.258 20:48:46 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74280 ']' 00:20:51.258 20:48:46 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74280 00:20:51.258 20:48:46 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:20:51.516 20:48:46 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.516 20:48:46 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74280 00:20:51.516 20:48:46 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.516 20:48:46 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.516 killing process with pid 74280 00:20:51.516 20:48:46 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74280' 00:20:51.517 20:48:46 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74280 00:20:51.517 
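The printf above lists the exact RPCs used to register the six xNVMe bdevs; rpc_cmd feeds them to the target one per line. The same setup can be reproduced by hand against a running spdk_tgt, a sketch assuming scripts/rpc.py from the repo and the default RPC socket:

  # filename, bdev name, io_mechanism, plus the trailing -c flag exactly as generated above
  ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
  # confirm registration; jq trims the bdev_get_bdevs JSON down to the names
  ./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'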
20:48:46 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74280 00:20:54.048 20:48:48 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:54.048 20:48:48 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:54.048 20:48:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:54.048 20:48:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.048 20:48:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:54.048 ************************************ 00:20:54.048 START TEST bdev_hello_world 00:20:54.048 ************************************ 00:20:54.048 20:48:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:54.048 [2024-11-26 20:48:48.897240] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:20:54.048 [2024-11-26 20:48:48.897424] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74582 ] 00:20:54.307 [2024-11-26 20:48:49.076169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.307 [2024-11-26 20:48:49.195004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.872 [2024-11-26 20:48:49.646384] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:54.872 [2024-11-26 20:48:49.646436] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:20:54.872 [2024-11-26 20:48:49.646456] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:54.872 [2024-11-26 20:48:49.648580] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:54.872 [2024-11-26 20:48:49.649090] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:54.872 [2024-11-26 20:48:49.649118] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:54.872 [2024-11-26 20:48:49.649358] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
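The hello_world pass above reduces to a single invocation: hello_bdev loads the bdev config from the JSON file, opens the named bdev and an io channel, writes "Hello World!", reads it back, and stops the app, exactly as the NOTICE lines trace. Run from the repo root under the same layout:

  build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1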
00:20:54.872 00:20:54.872 [2024-11-26 20:48:49.649384] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:56.250 00:20:56.250 real 0m2.053s 00:20:56.250 user 0m1.673s 00:20:56.250 sys 0m0.263s 00:20:56.250 20:48:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:56.250 20:48:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:56.250 ************************************ 00:20:56.250 END TEST bdev_hello_world 00:20:56.250 ************************************ 00:20:56.250 20:48:50 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:20:56.250 20:48:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:56.250 20:48:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:56.250 20:48:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:56.250 ************************************ 00:20:56.250 START TEST bdev_bounds 00:20:56.250 ************************************ 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74624 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:56.250 Process bdevio pid: 74624 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74624' 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74624 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74624 ']' 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.250 20:48:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:56.250 [2024-11-26 20:48:50.973438] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
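waitforlisten above simply blocks until the freshly started app answers on its UNIX-domain RPC socket. One way to approximate it, assuming scripts/rpc.py and the default /var/tmp/spdk.sock:

  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done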
00:20:56.250 [2024-11-26 20:48:50.973626] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74624 ] 00:20:56.250 [2024-11-26 20:48:51.147459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:56.508 [2024-11-26 20:48:51.271775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.508 [2024-11-26 20:48:51.271837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.508 [2024-11-26 20:48:51.271868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.076 20:48:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.076 20:48:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:57.076 20:48:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:57.076 I/O targets: 00:20:57.076 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:57.076 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:57.076 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:57.076 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:20:57.076 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:20:57.076 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:20:57.076 00:20:57.076 00:20:57.076 CUnit - A unit testing framework for C - Version 2.1-3 00:20:57.076 http://cunit.sourceforge.net/ 00:20:57.076 00:20:57.076 00:20:57.076 Suite: bdevio tests on: nvme3n1 00:20:57.076 Test: blockdev write read block ...passed 00:20:57.076 Test: blockdev write zeroes read block ...passed 00:20:57.076 Test: blockdev write zeroes read no split ...passed 00:20:57.076 Test: blockdev write zeroes read split ...passed 00:20:57.368 Test: blockdev write zeroes read split partial ...passed 00:20:57.368 Test: blockdev reset ...passed 00:20:57.368 Test: blockdev write read 8 blocks ...passed 00:20:57.368 Test: blockdev write read size > 128k ...passed 00:20:57.368 Test: blockdev write read invalid size ...passed 00:20:57.368 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:57.368 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:57.368 Test: blockdev write read max offset ...passed 00:20:57.368 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:57.368 Test: blockdev writev readv 8 blocks ...passed 00:20:57.368 Test: blockdev writev readv 30 x 1block ...passed 00:20:57.368 Test: blockdev writev readv block ...passed 00:20:57.368 Test: blockdev writev readv size > 128k ...passed 00:20:57.368 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:57.368 Test: blockdev comparev and writev ...passed 00:20:57.368 Test: blockdev nvme passthru rw ...passed 00:20:57.368 Test: blockdev nvme passthru vendor specific ...passed 00:20:57.368 Test: blockdev nvme admin passthru ...passed 00:20:57.368 Test: blockdev copy ...passed 00:20:57.368 Suite: bdevio tests on: nvme2n1 00:20:57.368 Test: blockdev write read block ...passed 00:20:57.368 Test: blockdev write zeroes read block ...passed 00:20:57.368 Test: blockdev write zeroes read no split ...passed 00:20:57.368 Test: blockdev write zeroes read split ...passed 00:20:57.368 Test: blockdev write zeroes read split partial ...passed 00:20:57.368 Test: blockdev reset ...passed 
00:20:57.368 Test: blockdev write read 8 blocks ...passed 00:20:57.368 Test: blockdev write read size > 128k ...passed 00:20:57.368 Test: blockdev write read invalid size ...passed 00:20:57.368 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:57.368 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:57.368 Test: blockdev write read max offset ...passed 00:20:57.368 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:57.368 Test: blockdev writev readv 8 blocks ...passed 00:20:57.368 Test: blockdev writev readv 30 x 1block ...passed 00:20:57.368 Test: blockdev writev readv block ...passed 00:20:57.368 Test: blockdev writev readv size > 128k ...passed 00:20:57.368 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:57.368 Test: blockdev comparev and writev ...passed 00:20:57.368 Test: blockdev nvme passthru rw ...passed 00:20:57.368 Test: blockdev nvme passthru vendor specific ...passed 00:20:57.368 Test: blockdev nvme admin passthru ...passed 00:20:57.368 Test: blockdev copy ...passed 00:20:57.368 Suite: bdevio tests on: nvme1n1 00:20:57.368 Test: blockdev write read block ...passed 00:20:57.368 Test: blockdev write zeroes read block ...passed 00:20:57.368 Test: blockdev write zeroes read no split ...passed 00:20:57.369 Test: blockdev write zeroes read split ...passed 00:20:57.369 Test: blockdev write zeroes read split partial ...passed 00:20:57.369 Test: blockdev reset ...passed 00:20:57.369 Test: blockdev write read 8 blocks ...passed 00:20:57.369 Test: blockdev write read size > 128k ...passed 00:20:57.369 Test: blockdev write read invalid size ...passed 00:20:57.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:57.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:57.369 Test: blockdev write read max offset ...passed 00:20:57.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:57.369 Test: blockdev writev readv 8 blocks ...passed 00:20:57.369 Test: blockdev writev readv 30 x 1block ...passed 00:20:57.369 Test: blockdev writev readv block ...passed 00:20:57.369 Test: blockdev writev readv size > 128k ...passed 00:20:57.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:57.369 Test: blockdev comparev and writev ...passed 00:20:57.369 Test: blockdev nvme passthru rw ...passed 00:20:57.369 Test: blockdev nvme passthru vendor specific ...passed 00:20:57.369 Test: blockdev nvme admin passthru ...passed 00:20:57.369 Test: blockdev copy ...passed 00:20:57.369 Suite: bdevio tests on: nvme0n3 00:20:57.369 Test: blockdev write read block ...passed 00:20:57.369 Test: blockdev write zeroes read block ...passed 00:20:57.369 Test: blockdev write zeroes read no split ...passed 00:20:57.627 Test: blockdev write zeroes read split ...passed 00:20:57.627 Test: blockdev write zeroes read split partial ...passed 00:20:57.627 Test: blockdev reset ...passed 00:20:57.627 Test: blockdev write read 8 blocks ...passed 00:20:57.627 Test: blockdev write read size > 128k ...passed 00:20:57.627 Test: blockdev write read invalid size ...passed 00:20:57.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:57.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:57.627 Test: blockdev write read max offset ...passed 00:20:57.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:57.627 Test: blockdev writev readv 8 blocks 
...passed 00:20:57.627 Test: blockdev writev readv 30 x 1block ...passed 00:20:57.627 Test: blockdev writev readv block ...passed 00:20:57.627 Test: blockdev writev readv size > 128k ...passed 00:20:57.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:57.627 Test: blockdev comparev and writev ...passed 00:20:57.627 Test: blockdev nvme passthru rw ...passed 00:20:57.627 Test: blockdev nvme passthru vendor specific ...passed 00:20:57.627 Test: blockdev nvme admin passthru ...passed 00:20:57.627 Test: blockdev copy ...passed 00:20:57.627 Suite: bdevio tests on: nvme0n2 00:20:57.627 Test: blockdev write read block ...passed 00:20:57.627 Test: blockdev write zeroes read block ...passed 00:20:57.627 Test: blockdev write zeroes read no split ...passed 00:20:57.627 Test: blockdev write zeroes read split ...passed 00:20:57.627 Test: blockdev write zeroes read split partial ...passed 00:20:57.627 Test: blockdev reset ...passed 00:20:57.627 Test: blockdev write read 8 blocks ...passed 00:20:57.627 Test: blockdev write read size > 128k ...passed 00:20:57.627 Test: blockdev write read invalid size ...passed 00:20:57.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:57.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:57.627 Test: blockdev write read max offset ...passed 00:20:57.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:57.627 Test: blockdev writev readv 8 blocks ...passed 00:20:57.627 Test: blockdev writev readv 30 x 1block ...passed 00:20:57.627 Test: blockdev writev readv block ...passed 00:20:57.627 Test: blockdev writev readv size > 128k ...passed 00:20:57.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:57.627 Test: blockdev comparev and writev ...passed 00:20:57.627 Test: blockdev nvme passthru rw ...passed 00:20:57.627 Test: blockdev nvme passthru vendor specific ...passed 00:20:57.627 Test: blockdev nvme admin passthru ...passed 00:20:57.627 Test: blockdev copy ...passed 00:20:57.627 Suite: bdevio tests on: nvme0n1 00:20:57.627 Test: blockdev write read block ...passed 00:20:57.627 Test: blockdev write zeroes read block ...passed 00:20:57.627 Test: blockdev write zeroes read no split ...passed 00:20:57.627 Test: blockdev write zeroes read split ...passed 00:20:57.627 Test: blockdev write zeroes read split partial ...passed 00:20:57.627 Test: blockdev reset ...passed 00:20:57.627 Test: blockdev write read 8 blocks ...passed 00:20:57.627 Test: blockdev write read size > 128k ...passed 00:20:57.627 Test: blockdev write read invalid size ...passed 00:20:57.627 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:57.627 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:57.627 Test: blockdev write read max offset ...passed 00:20:57.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:57.627 Test: blockdev writev readv 8 blocks ...passed 00:20:57.627 Test: blockdev writev readv 30 x 1block ...passed 00:20:57.627 Test: blockdev writev readv block ...passed 00:20:57.627 Test: blockdev writev readv size > 128k ...passed 00:20:57.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:57.627 Test: blockdev comparev and writev ...passed 00:20:57.627 Test: blockdev nvme passthru rw ...passed 00:20:57.627 Test: blockdev nvme passthru vendor specific ...passed 00:20:57.627 Test: blockdev nvme admin passthru ...passed 00:20:57.627 Test: blockdev copy ...passed 
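Each of the six suites above runs the same 23-case battery, which is why the summary below totals 138 tests. bdevio itself is launched in two steps, as the earlier traces suggest: the app starts with -w and idles until tests.py triggers the run over RPC. A sketch of that launch, assuming the repo layout used here:

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # -w: wait; the run is triggered over RPC
  test/bdev/bdevio/tests.py perform_tests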
00:20:57.627 00:20:57.627 Run Summary: Type Total Ran Passed Failed Inactive 00:20:57.627 suites 6 6 n/a 0 0 00:20:57.627 tests 138 138 138 0 0 00:20:57.627 asserts 780 780 780 0 n/a 00:20:57.627 00:20:57.627 Elapsed time = 1.597 seconds 00:20:57.627 0 00:20:57.627 20:48:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74624 00:20:57.627 20:48:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74624 ']' 00:20:57.627 20:48:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74624 00:20:57.627 20:48:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:57.627 20:48:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.627 20:48:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74624 00:20:57.885 killing process with pid 74624 00:20:57.885 20:48:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:57.885 20:48:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:57.885 20:48:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74624' 00:20:57.885 20:48:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74624 00:20:57.885 20:48:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74624 00:20:59.258 20:48:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:59.258 00:20:59.258 real 0m2.976s 00:20:59.258 user 0m7.568s 00:20:59.258 sys 0m0.402s 00:20:59.258 20:48:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.258 ************************************ 00:20:59.258 END TEST bdev_bounds 00:20:59.258 ************************************ 00:20:59.258 20:48:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:59.258 20:48:53 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:59.258 20:48:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:59.258 20:48:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.258 20:48:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:59.258 ************************************ 00:20:59.258 START TEST bdev_nbd 00:20:59.258 ************************************ 00:20:59.258 20:48:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:59.258 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:59.258 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:59.258 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:59.258 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:59.258 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:59.258 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
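nbd_function_test, starting above, exports each bdev as a kernel /dev/nbdX device and pushes one 4096-byte block through it with dd, as the traces that follow show. The per-bdev round trip, sketched with the same RPCs against the test's /var/tmp/spdk-nbd.sock (the dd output path is arbitrary):

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks   # reports [] once all devices are stopped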
00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74689 00:20:59.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74689 /var/tmp/spdk-nbd.sock 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74689 ']' 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.259 20:48:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:59.259 [2024-11-26 20:48:54.014460] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:20:59.259 [2024-11-26 20:48:54.014600] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.259 [2024-11-26 20:48:54.187235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.517 [2024-11-26 20:48:54.303113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:00.085 20:48:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:21:00.343 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:00.343 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:00.343 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:00.343 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:00.343 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:00.343 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:00.343 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:00.343 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:00.343 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:00.343 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:00.343 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:00.344 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:00.344 
1+0 records in 00:21:00.344 1+0 records out 00:21:00.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624273 s, 6.6 MB/s 00:21:00.344 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.344 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:00.344 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.344 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:00.344 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:00.344 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:00.344 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:00.344 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:00.602 1+0 records in 00:21:00.602 1+0 records out 00:21:00.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505023 s, 8.1 MB/s 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:00.602 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:21:00.861 20:48:55 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:00.861 1+0 records in 00:21:00.861 1+0 records out 00:21:00.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540909 s, 7.6 MB/s 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:00.861 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.120 1+0 records in 00:21:01.120 1+0 records out 00:21:01.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570177 s, 7.2 MB/s 00:21:01.120 20:48:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.120 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:01.120 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.120 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:01.120 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:01.120 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:01.120 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:01.120 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.378 1+0 records in 00:21:01.378 1+0 records out 00:21:01.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000812388 s, 5.0 MB/s 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:01.378 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:21:01.636 20:48:56 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.636 1+0 records in 00:21:01.636 1+0 records out 00:21:01.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000850885 s, 4.8 MB/s 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:01.636 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:01.894 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd0", 00:21:01.894 "bdev_name": "nvme0n1" 00:21:01.894 }, 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd1", 00:21:01.894 "bdev_name": "nvme0n2" 00:21:01.894 }, 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd2", 00:21:01.894 "bdev_name": "nvme0n3" 00:21:01.894 }, 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd3", 00:21:01.894 "bdev_name": "nvme1n1" 00:21:01.894 }, 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd4", 00:21:01.894 "bdev_name": "nvme2n1" 00:21:01.894 }, 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd5", 00:21:01.894 "bdev_name": "nvme3n1" 00:21:01.894 } 00:21:01.894 ]' 00:21:01.894 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:01.894 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd0", 00:21:01.894 "bdev_name": "nvme0n1" 00:21:01.894 }, 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd1", 00:21:01.894 "bdev_name": "nvme0n2" 00:21:01.894 }, 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd2", 00:21:01.894 "bdev_name": "nvme0n3" 00:21:01.894 }, 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd3", 00:21:01.894 "bdev_name": "nvme1n1" 00:21:01.894 }, 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd4", 00:21:01.894 "bdev_name": "nvme2n1" 00:21:01.894 }, 00:21:01.894 { 00:21:01.894 "nbd_device": "/dev/nbd5", 00:21:01.894 "bdev_name": "nvme3n1" 00:21:01.894 } 00:21:01.894 ]' 00:21:01.894 20:48:56 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:01.894 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:21:01.894 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:01.894 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:21:01.894 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:01.894 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:01.894 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:01.894 20:48:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:02.152 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:02.152 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:02.152 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:02.152 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.152 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.152 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:02.152 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:02.152 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.152 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.152 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:02.410 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:02.410 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:02.410 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:02.410 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.410 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.410 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:02.410 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:02.410 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.410 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.410 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.978 20:48:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:21:03.237 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:21:03.237 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:21:03.237 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:21:03.237 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.237 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.237 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:21:03.237 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:03.237 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.237 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:03.237 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:21:03.496 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:21:03.496 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:21:03.496 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:21:03.496 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.496 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.496 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:21:03.496 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:03.496 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.496 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:03.496 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:03.496 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:03.754 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:21:04.012 /dev/nbd0 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.012 1+0 records in 00:21:04.012 1+0 records out 00:21:04.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589456 s, 6.9 MB/s 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:04.012 20:48:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:21:04.282 /dev/nbd1 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.282 1+0 records in 00:21:04.282 1+0 records out 00:21:04.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480724 s, 8.5 MB/s 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:04.282 20:48:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:04.282 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:21:04.552 /dev/nbd10 00:21:04.552 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:21:04.552 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:21:04.552 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:21:04.552 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:04.552 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.552 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.552 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:21:04.552 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:04.552 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.552 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.552 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.811 1+0 records in 00:21:04.811 1+0 records out 00:21:04.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589919 s, 6.9 MB/s 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:21:04.811 /dev/nbd11 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:04.811 20:48:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.811 1+0 records in 00:21:04.811 1+0 records out 00:21:04.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664371 s, 6.2 MB/s 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:04.811 20:48:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:21:05.071 /dev/nbd12 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.071 1+0 records in 00:21:05.071 1+0 records out 00:21:05.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726428 s, 5.6 MB/s 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:05.071 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:21:05.330 /dev/nbd13 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.330 1+0 records in 00:21:05.330 1+0 records out 00:21:05.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000842875 s, 4.9 MB/s 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:05.330 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:05.589 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd0", 00:21:05.589 "bdev_name": "nvme0n1" 00:21:05.589 }, 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd1", 00:21:05.589 "bdev_name": "nvme0n2" 00:21:05.589 }, 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd10", 00:21:05.589 "bdev_name": "nvme0n3" 00:21:05.589 }, 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd11", 00:21:05.589 "bdev_name": "nvme1n1" 00:21:05.589 }, 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd12", 00:21:05.589 "bdev_name": "nvme2n1" 00:21:05.589 }, 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd13", 00:21:05.589 "bdev_name": "nvme3n1" 00:21:05.589 } 00:21:05.589 ]' 00:21:05.589 20:49:00 
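
Each nbd_start_disk above is followed by the same readiness probe, traced as autotest_common.sh lines 872-893: poll /proc/partitions until the device name appears, then prove the device actually services I/O with a single 4 KiB O_DIRECT read whose size is checked via stat. A minimal bash sketch reconstructed from this trace follows; the retry delay is an assumption, since the xtrace does not show it:

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # scratch file used in this run
        # Wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not visible in the xtrace
        done
        # Then confirm the device answers a real read before the test uses it.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1   # assumed
        done
        return 1
    }

The matching waitfornbd_exit helper in nbd_common.sh, seen in the nbd_stop_disk loops above, inverts the first check and polls until the name is gone from /proc/partitions.
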
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd0", 00:21:05.589 "bdev_name": "nvme0n1" 00:21:05.589 }, 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd1", 00:21:05.589 "bdev_name": "nvme0n2" 00:21:05.589 }, 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd10", 00:21:05.589 "bdev_name": "nvme0n3" 00:21:05.589 }, 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd11", 00:21:05.589 "bdev_name": "nvme1n1" 00:21:05.589 }, 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd12", 00:21:05.589 "bdev_name": "nvme2n1" 00:21:05.589 }, 00:21:05.589 { 00:21:05.589 "nbd_device": "/dev/nbd13", 00:21:05.589 "bdev_name": "nvme3n1" 00:21:05.589 } 00:21:05.589 ]' 00:21:05.589 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:05.848 /dev/nbd1 00:21:05.848 /dev/nbd10 00:21:05.848 /dev/nbd11 00:21:05.848 /dev/nbd12 00:21:05.848 /dev/nbd13' 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:05.848 /dev/nbd1 00:21:05.848 /dev/nbd10 00:21:05.848 /dev/nbd11 00:21:05.848 /dev/nbd12 00:21:05.848 /dev/nbd13' 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:05.848 256+0 records in 00:21:05.848 256+0 records out 00:21:05.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00960068 s, 109 MB/s 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:05.848 256+0 records in 00:21:05.848 256+0 records out 00:21:05.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118449 s, 8.9 MB/s 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:05.848 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:06.106 256+0 records in 00:21:06.106 256+0 records out 00:21:06.106 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.12797 s, 8.2 MB/s 00:21:06.106 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:06.106 20:49:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:21:06.106 256+0 records in 00:21:06.106 256+0 records out 00:21:06.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12394 s, 8.5 MB/s 00:21:06.106 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:06.106 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:21:06.365 256+0 records in 00:21:06.365 256+0 records out 00:21:06.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126362 s, 8.3 MB/s 00:21:06.365 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:06.365 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:21:06.365 256+0 records in 00:21:06.365 256+0 records out 00:21:06.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140868 s, 7.4 MB/s 00:21:06.365 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:06.365 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:21:06.623 256+0 records in 00:21:06.623 256+0 records out 00:21:06.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129356 s, 8.1 MB/s 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:06.623 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:06.624 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:06.624 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:06.624 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:06.624 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:06.624 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.624 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:06.882 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:06.882 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:06.882 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:06.882 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.882 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.882 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:06.882 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:06.882 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.882 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.882 20:49:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:07.141 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:07.141 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:07.141 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:07.141 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:07.141 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:07.141 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:07.141 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:07.141 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:07.141 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.141 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:21:07.399 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:21:07.399 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:21:07.399 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:21:07.399 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:07.399 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:07.399 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:21:07.399 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:07.399 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:07.399 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.399 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:21:07.658 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:21:07.658 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:21:07.658 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:21:07.658 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:07.658 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:07.658 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:21:07.658 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:07.658 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:07.658 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.658 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:21:07.917 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:21:07.917 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:21:07.917 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:21:07.917 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:07.917 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:07.917 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:21:07.917 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:07.917 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:07.917 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.917 20:49:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:21:08.175 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:21:08.175 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:21:08.175 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:21:08.175 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.175 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:21:08.175 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:21:08.175 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:08.175 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.175 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:08.175 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:08.175 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:08.433 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:08.434 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:08.434 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:08.434 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:08.692 malloc_lvol_verify 00:21:08.692 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:08.950 43fffde3-5894-42b0-aa21-385236cf8d7f 00:21:08.950 20:49:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:09.208 b3cf06d0-5471-4699-9f8b-a6e76ce26486 00:21:09.208 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:09.467 /dev/nbd0 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:09.467 mke2fs 1.47.0 (5-Feb-2023) 00:21:09.467 
Discarding device blocks: 0/4096 done 00:21:09.467 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:09.467 00:21:09.467 Allocating group tables: 0/1 done 00:21:09.467 Writing inode tables: 0/1 done 00:21:09.467 Creating journal (1024 blocks): done 00:21:09.467 Writing superblocks and filesystem accounting information: 0/1 done 00:21:09.467 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:09.467 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:09.726 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:09.726 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:09.726 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:09.726 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:09.726 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:09.726 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74689 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74689 ']' 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74689 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74689 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.984 killing process with pid 74689 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74689' 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74689 00:21:09.984 20:49:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74689 00:21:11.360 20:49:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:11.360 00:21:11.360 real 0m12.095s 00:21:11.360 user 0m16.072s 00:21:11.360 sys 0m4.892s 00:21:11.360 ************************************ 00:21:11.360 END TEST bdev_nbd 00:21:11.360 ************************************ 00:21:11.360 20:49:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.360 20:49:06 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.360 20:49:06 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:21:11.360 20:49:06 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:21:11.360 20:49:06 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:21:11.360 20:49:06 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:21:11.360 20:49:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:11.360 20:49:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.360 20:49:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.360 ************************************ 00:21:11.360 START TEST bdev_fio 00:21:11.360 ************************************ 00:21:11.360 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:21:11.360 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:11.360 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:11.361 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:21:11.361 
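
fio_test_suite begins by generating bdev.fio: fio_config_gen touches the file, emits a global section and the verify-workload options from here-documents (the bare 'cat' commands in the trace), and, because the bdev type is AIO and 'fio --version' matches the fio-3* pattern, appends serialize_overlap=1; the trace then continues below with a two-line job section per bdev. A rough sketch of the shape of that generation, assuming the options land in bdev.fio by redirection; the template contents are elided because the xtrace does not show them:

    config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    touch "$config"
    printf '[global]\n' >> "$config"
    # ... global and verify-workload options go here; the real helper emits them
    # from here-documents whose contents this xtrace does not show ...
    # fio 3.x serializes overlapping verify I/O for AIO-style bdev tests.
    [[ $(/usr/src/fio/fio --version) == *fio-3* ]] && echo serialize_overlap=1 >> "$config"
    for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$config"
    done
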
20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:11.361 ************************************ 00:21:11.361 START TEST bdev_fio_rw_verify 00:21:11.361 ************************************ 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:11.361 20:49:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:11.619 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:11.619 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:11.619 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:11.619 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:11.619 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:11.619 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:11.619 fio-3.35 00:21:11.619 Starting 6 threads 00:21:23.821 00:21:23.821 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75109: Tue Nov 26 20:49:17 2024 00:21:23.821 read: IOPS=31.7k, BW=124MiB/s (130MB/s)(1240MiB/10001msec) 00:21:23.821 slat (usec): min=2, max=1116, avg= 6.55, stdev= 5.45 00:21:23.821 clat (usec): min=128, max=75678, avg=575.88, 
stdev=442.85 00:21:23.821 lat (usec): min=131, max=75695, avg=582.43, stdev=443.30 00:21:23.821 clat percentiles (usec): 00:21:23.821 | 50.000th=[ 570], 99.000th=[ 1139], 99.900th=[ 1860], 99.990th=[ 4228], 00:21:23.821 | 99.999th=[76022] 00:21:23.821 write: IOPS=32.1k, BW=125MiB/s (131MB/s)(1254MiB/10001msec); 0 zone resets 00:21:23.821 slat (usec): min=9, max=1703, avg=26.05, stdev=32.03 00:21:23.821 clat (usec): min=93, max=5716, avg=663.99, stdev=257.29 00:21:23.822 lat (usec): min=112, max=5740, avg=690.04, stdev=261.11 00:21:23.822 clat percentiles (usec): 00:21:23.822 | 50.000th=[ 652], 99.000th=[ 1385], 99.900th=[ 2212], 99.990th=[ 4146], 00:21:23.822 | 99.999th=[ 5211] 00:21:23.822 bw ( KiB/s): min=100806, max=159203, per=100.00%, avg=128554.26, stdev=2763.69, samples=114 00:21:23.822 iops : min=25201, max=39800, avg=32138.26, stdev=690.90, samples=114 00:21:23.822 lat (usec) : 100=0.01%, 250=4.59%, 500=28.47%, 750=38.78%, 1000=22.91% 00:21:23.822 lat (msec) : 2=5.13%, 4=0.12%, 10=0.01%, 100=0.01% 00:21:23.822 cpu : usr=56.13%, sys=28.60%, ctx=8269, majf=0, minf=26595 00:21:23.822 IO depths : 1=11.9%, 2=24.3%, 4=50.7%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.822 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.822 issued rwts: total=317522,321029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.822 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:23.822 00:21:23.822 Run status group 0 (all jobs): 00:21:23.822 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=1240MiB (1301MB), run=10001-10001msec 00:21:23.822 WRITE: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=1254MiB (1315MB), run=10001-10001msec 00:21:24.080 ----------------------------------------------------- 00:21:24.080 Suppressions used: 00:21:24.080 count bytes template 00:21:24.080 6 48 /usr/src/fio/parse.c 00:21:24.080 3294 316224 /usr/src/fio/iolog.c 00:21:24.080 1 8 libtcmalloc_minimal.so 00:21:24.080 1 904 libcrypto.so 00:21:24.080 ----------------------------------------------------- 00:21:24.080 00:21:24.080 00:21:24.080 real 0m12.733s 00:21:24.080 user 0m35.887s 00:21:24.080 sys 0m17.586s 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:24.080 ************************************ 00:21:24.080 END TEST bdev_fio_rw_verify 00:21:24.080 ************************************ 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:24.080 20:49:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:24.081 20:49:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "293a3d50-c050-4090-96e4-1bec86de6e68"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "293a3d50-c050-4090-96e4-1bec86de6e68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "7542eebf-9687-443f-b155-8d5222011ea4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7542eebf-9687-443f-b155-8d5222011ea4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "95c61436-7829-46be-9989-cc30a402a1a8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "95c61436-7829-46be-9989-cc30a402a1a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "05fbe579-7b7d-410e-a6f0-240885680947"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "05fbe579-7b7d-410e-a6f0-240885680947",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b31fb77f-bf49-4bf3-b0f4-830d69c5e769"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b31fb77f-bf49-4bf3-b0f4-830d69c5e769",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2db01ba4-fb76-42f4-bb81-7032426f5a9a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2db01ba4-fb76-42f4-bb81-7032426f5a9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:21:24.081 20:49:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:24.081 20:49:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:24.081 /home/vagrant/spdk_repo/spdk 00:21:24.081 20:49:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:24.081 20:49:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:24.081 20:49:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
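The trim pass above never runs a fio job: bdev/blockdev.sh@354 pipes each bdev's JSON through the jq filter shown in the log to collect unmap-capable devices, and every xNVMe bdev in the dump reports "unmap": false, so the filter yields an empty string and the subsequent [[ -n '' ]] test skips straight to cleanup. A minimal reproduction of that filter, with a sample object mirroring the bdev JSON printed above (everything else is illustrative):

printf '%s\n' '{"name": "nvme0n1", "supported_io_types": {"unmap": false}}' \
  | jq -r 'select(.supported_io_types.unmap == true) | .name'
# prints nothing for these bdevs, so no device is added to the trim job list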
00:21:24.081 00:21:24.081 real 0m12.932s 00:21:24.081 user 0m35.980s 00:21:24.081 sys 0m17.696s 00:21:24.081 20:49:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.081 20:49:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:24.081 ************************************ 00:21:24.081 END TEST bdev_fio 00:21:24.081 ************************************ 00:21:24.081 20:49:19 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:24.081 20:49:19 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:24.081 20:49:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:24.081 20:49:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.081 20:49:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:24.081 ************************************ 00:21:24.081 START TEST bdev_verify 00:21:24.081 ************************************ 00:21:24.081 20:49:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:24.339 [2024-11-26 20:49:19.155101] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:24.339 [2024-11-26 20:49:19.155232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75284 ] 00:21:24.596 [2024-11-26 20:49:19.338864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:24.596 [2024-11-26 20:49:19.515339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.596 [2024-11-26 20:49:19.515362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.162 Running I/O for 5 seconds... 
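bdevperf is driven here with -q 128 (queue depth per job), -o 4096 (4 KiB I/Os), -w verify (read back and check every write), -t 5 (seconds) and -m 0x3 (cores 0 and 1); the -C flag enables multithread mode, fanning each bdev out to every core in the mask, which is what produces the paired Core Mask 0x1/0x2 rows per device in the table below. The --json file describes the xNVMe bdevs under test. A hedged sketch of what such a file could look like; the real bdev.json is generated earlier in the run, and the filename and io_mechanism values here are assumptions for illustration:

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "name": "nvme0n1",
            "filename": "/dev/nvme0n1",
            "io_mechanism": "io_uring"
          }
        }
      ]
    }
  ]
}
EOF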
00:21:27.470 21504.00 IOPS, 84.00 MiB/s [2024-11-26T20:49:23.398Z] 21440.00 IOPS, 83.75 MiB/s [2024-11-26T20:49:24.333Z] 21408.00 IOPS, 83.62 MiB/s [2024-11-26T20:49:25.268Z] 22504.00 IOPS, 87.91 MiB/s [2024-11-26T20:49:25.268Z] 22022.60 IOPS, 86.03 MiB/s 00:21:30.274 Latency(us) 00:21:30.274 [2024-11-26T20:49:25.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.274 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:30.274 Verification LBA range: start 0x0 length 0x80000 00:21:30.274 nvme0n1 : 5.07 1666.14 6.51 0.00 0.00 76690.30 11172.33 78892.86 00:21:30.274 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:30.274 Verification LBA range: start 0x80000 length 0x80000 00:21:30.274 nvme0n1 : 5.07 1639.45 6.40 0.00 0.00 77941.39 11734.06 92873.87 00:21:30.274 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:30.274 Verification LBA range: start 0x0 length 0x80000 00:21:30.274 nvme0n2 : 5.07 1665.14 6.50 0.00 0.00 76602.92 16103.13 72901.00 00:21:30.274 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:30.274 Verification LBA range: start 0x80000 length 0x80000 00:21:30.274 nvme0n2 : 5.08 1637.41 6.40 0.00 0.00 77907.69 14667.58 86882.01 00:21:30.274 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:30.274 Verification LBA range: start 0x0 length 0x80000 00:21:30.274 nvme0n3 : 5.06 1668.35 6.52 0.00 0.00 76323.95 12420.63 67907.78 00:21:30.274 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:30.274 Verification LBA range: start 0x80000 length 0x80000 00:21:30.274 nvme0n3 : 5.09 1634.74 6.39 0.00 0.00 77910.27 11983.73 81888.79 00:21:30.274 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:30.275 Verification LBA range: start 0x0 length 0x20000 00:21:30.275 nvme1n1 : 5.08 1664.59 6.50 0.00 0.00 76366.07 13918.60 78892.86 00:21:30.275 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:30.275 Verification LBA range: start 0x20000 length 0x20000 00:21:30.275 nvme1n1 : 5.09 1633.31 6.38 0.00 0.00 77845.63 6272.73 100363.70 00:21:30.275 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:30.275 Verification LBA range: start 0x0 length 0xbd0bd 00:21:30.275 nvme2n1 : 5.08 2703.17 10.56 0.00 0.00 46839.66 3807.33 76895.57 00:21:30.275 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:30.275 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:21:30.275 nvme2n1 : 5.08 2668.99 10.43 0.00 0.00 47449.16 4400.27 88879.30 00:21:30.275 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:30.275 Verification LBA range: start 0x0 length 0xa0000 00:21:30.275 nvme3n1 : 5.08 1662.88 6.50 0.00 0.00 76149.85 3994.58 84884.72 00:21:30.275 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:30.275 Verification LBA range: start 0xa0000 length 0xa0000 00:21:30.275 nvme3n1 : 5.07 1591.48 6.22 0.00 0.00 79475.07 8925.38 103359.63 00:21:30.275 [2024-11-26T20:49:25.269Z] =================================================================================================================== 00:21:30.275 [2024-11-26T20:49:25.269Z] Total : 21835.65 85.30 0.00 0.00 69881.70 3807.33 103359.63 00:21:31.736 00:21:31.736 real 0m7.311s 00:21:31.736 user 0m11.615s 00:21:31.736 sys 0m1.790s 00:21:31.736 20:49:26 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.736 20:49:26 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:31.736 ************************************ 00:21:31.736 END TEST bdev_verify 00:21:31.736 ************************************ 00:21:31.736 20:49:26 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:31.736 20:49:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:31.736 20:49:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.736 20:49:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:31.736 ************************************ 00:21:31.736 START TEST bdev_verify_big_io 00:21:31.736 ************************************ 00:21:31.736 20:49:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:31.736 [2024-11-26 20:49:26.537546] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:31.736 [2024-11-26 20:49:26.537683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75385 ] 00:21:31.736 [2024-11-26 20:49:26.709874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:31.995 [2024-11-26 20:49:26.833693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.995 [2024-11-26 20:49:26.833745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.562 Running I/O for 5 seconds... 
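Two cross-checks using only values printed in this log: bandwidth is IOPS times I/O size, so the verify Total row above and the big-I/O Total row below should both satisfy MiB/s = IOPS x io_size / 2^20, with io_size being 4096 and 65536 from the respective -o flags:

echo '21835.65 * 4096 / 1048576' | bc -l   # ~85.30, matches the verify Total of 85.30 MiB/s above
echo '1084.10 * 65536 / 1048576' | bc -l   # ~67.76, matches the big-I/O Total of 67.76 MiB/s below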
00:21:39.120 2160.00 IOPS, 135.00 MiB/s [2024-11-26T20:49:34.114Z] 3416.00 IOPS, 213.50 MiB/s 00:21:39.120 Latency(us) 00:21:39.120 [2024-11-26T20:49:34.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.120 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:39.120 Verification LBA range: start 0x0 length 0x8000 00:21:39.120 nvme0n1 : 6.24 87.14 5.45 0.00 0.00 1365382.67 38198.13 1669732.45 00:21:39.120 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:39.120 Verification LBA range: start 0x8000 length 0x8000 00:21:39.120 nvme0n1 : 5.52 92.69 5.79 0.00 0.00 1344225.13 5274.09 1621797.55 00:21:39.120 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:39.120 Verification LBA range: start 0x0 length 0x8000 00:21:39.120 nvme0n2 : 6.10 73.44 4.59 0.00 0.00 1582263.07 69405.74 1997287.62 00:21:39.120 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:39.120 Verification LBA range: start 0x8000 length 0x8000 00:21:39.120 nvme0n2 : 6.32 101.34 6.33 0.00 0.00 1109071.97 78393.54 1342177.28 00:21:39.120 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:39.120 Verification LBA range: start 0x0 length 0x8000 00:21:39.120 nvme0n3 : 6.46 89.16 5.57 0.00 0.00 1245777.38 100363.70 1150437.67 00:21:39.120 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:39.120 Verification LBA range: start 0x8000 length 0x8000 00:21:39.120 nvme0n3 : 6.32 65.85 4.12 0.00 0.00 1648972.18 100863.02 2077179.12 00:21:39.120 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:39.120 Verification LBA range: start 0x0 length 0x2000 00:21:39.120 nvme1n1 : 6.45 96.77 6.05 0.00 0.00 1067671.06 75397.61 1637775.85 00:21:39.120 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:39.120 Verification LBA range: start 0x2000 length 0x2000 00:21:39.120 nvme1n1 : 6.11 60.21 3.76 0.00 0.00 1753619.97 14667.58 2700332.86 00:21:39.120 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:39.120 Verification LBA range: start 0x0 length 0xbd0b 00:21:39.120 nvme2n1 : 6.46 91.58 5.72 0.00 0.00 1115688.51 5523.75 2300875.34 00:21:39.120 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:39.120 Verification LBA range: start 0xbd0b length 0xbd0b 00:21:39.120 nvme2n1 : 6.48 118.57 7.41 0.00 0.00 874977.85 27462.70 926741.46 00:21:39.120 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:39.120 Verification LBA range: start 0x0 length 0xa000 00:21:39.120 nvme3n1 : 6.47 89.02 5.56 0.00 0.00 1090183.51 1966.08 2396745.14 00:21:39.121 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:39.121 Verification LBA range: start 0xa000 length 0xa000 00:21:39.121 nvme3n1 : 6.49 118.32 7.40 0.00 0.00 832331.61 2590.23 1326198.98 00:21:39.121 [2024-11-26T20:49:34.115Z] =================================================================================================================== 00:21:39.121 [2024-11-26T20:49:34.115Z] Total : 1084.10 67.76 0.00 0.00 1195025.97 1966.08 2700332.86 00:21:40.525 00:21:40.525 real 0m9.013s 00:21:40.525 user 0m16.621s 00:21:40.525 sys 0m0.475s 00:21:40.525 20:49:35 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.525 20:49:35 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set 
+x 00:21:40.525 ************************************ 00:21:40.525 END TEST bdev_verify_big_io 00:21:40.525 ************************************ 00:21:40.525 20:49:35 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:40.525 20:49:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:40.525 20:49:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.525 20:49:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:40.525 ************************************ 00:21:40.525 START TEST bdev_write_zeroes 00:21:40.525 ************************************ 00:21:40.526 20:49:35 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:40.784 [2024-11-26 20:49:35.622321] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:40.784 [2024-11-26 20:49:35.622499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75506 ] 00:21:41.044 [2024-11-26 20:49:35.814356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.044 [2024-11-26 20:49:35.977991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.611 Running I/O for 1 seconds... 00:21:42.549 78238.00 IOPS, 305.62 MiB/s 00:21:42.549 Latency(us) 00:21:42.549 [2024-11-26T20:49:37.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.549 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:42.549 nvme0n1 : 1.02 11868.23 46.36 0.00 0.00 10775.31 5929.45 27712.37 00:21:42.549 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:42.549 nvme0n2 : 1.03 11851.05 46.29 0.00 0.00 10784.55 6303.94 28086.86 00:21:42.549 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:42.549 nvme0n3 : 1.03 11833.40 46.22 0.00 0.00 10792.18 6303.94 28586.18 00:21:42.549 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:42.549 nvme1n1 : 1.03 11816.29 46.16 0.00 0.00 10800.94 6179.11 28960.67 00:21:42.549 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:42.549 nvme2n1 : 1.03 17660.84 68.99 0.00 0.00 7206.67 3136.37 25964.74 00:21:42.549 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:42.549 nvme3n1 : 1.04 11860.58 46.33 0.00 0.00 10680.33 4681.14 28336.52 00:21:42.549 [2024-11-26T20:49:37.543Z] =================================================================================================================== 00:21:42.549 [2024-11-26T20:49:37.543Z] Total : 76890.39 300.35 0.00 0.00 9945.28 3136.37 28960.67 00:21:43.929 00:21:43.929 real 0m3.232s 00:21:43.929 user 0m2.365s 00:21:43.929 sys 0m0.677s 00:21:43.929 20:49:38 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.929 20:49:38 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:43.929 ************************************ 00:21:43.929 END TEST 
bdev_write_zeroes 00:21:43.929 ************************************ 00:21:43.929 20:49:38 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:43.929 20:49:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:43.929 20:49:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.930 20:49:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:43.930 ************************************ 00:21:43.930 START TEST bdev_json_nonenclosed 00:21:43.930 ************************************ 00:21:43.930 20:49:38 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:44.188 [2024-11-26 20:49:38.927070] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:21:44.188 [2024-11-26 20:49:38.927250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75565 ] 00:21:44.188 [2024-11-26 20:49:39.119938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.446 [2024-11-26 20:49:39.239603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.446 [2024-11-26 20:49:39.239741] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:44.446 [2024-11-26 20:49:39.239776] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:44.446 [2024-11-26 20:49:39.239796] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:44.704 00:21:44.704 real 0m0.700s 00:21:44.704 user 0m0.427s 00:21:44.704 sys 0m0.168s 00:21:44.704 20:49:39 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.704 20:49:39 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:44.704 ************************************ 00:21:44.704 END TEST bdev_json_nonenclosed 00:21:44.704 ************************************ 00:21:44.704 20:49:39 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:44.704 20:49:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:44.704 20:49:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.704 20:49:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:44.704 ************************************ 00:21:44.704 START TEST bdev_json_nonarray 00:21:44.704 ************************************ 00:21:44.704 20:49:39 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:44.704 [2024-11-26 20:49:39.683435] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
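bdev_json_nonenclosed above and bdev_json_nonarray just below are negative tests: each feeds bdevperf a deliberately malformed config and passes only if json_config_prepare_ctx rejects it and the app exits non-zero. The two error messages in this run ("not enclosed in {}." and "'subsystems' should be an array.") pin down the shape of the inputs; the bodies below are guesses reconstructed from those messages, not the actual contents of test/bdev/nonenclosed.json and test/bdev/nonarray.json:

cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# the first lacks the enclosing braces, the second makes "subsystems" an
# object instead of an array, matching the two rejections logged here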
00:21:44.704 [2024-11-26 20:49:39.683609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75591 ] 00:21:44.964 [2024-11-26 20:49:39.876986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.224 [2024-11-26 20:49:39.995267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.224 [2024-11-26 20:49:39.995399] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:21:45.224 [2024-11-26 20:49:39.995435] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:45.224 [2024-11-26 20:49:39.995453] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:45.483 00:21:45.483 real 0m0.708s 00:21:45.483 user 0m0.434s 00:21:45.483 sys 0m0.168s 00:21:45.483 20:49:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.483 20:49:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:45.483 ************************************ 00:21:45.483 END TEST bdev_json_nonarray 00:21:45.483 ************************************ 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:21:45.483 20:49:40 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:46.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:46.988 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:46.988 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:46.988 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:46.988 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:47.248 00:21:47.248 real 0m58.966s 00:21:47.248 user 1m39.681s 00:21:47.248 sys 0m29.890s 00:21:47.248 20:49:42 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.248 20:49:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:47.248 ************************************ 00:21:47.248 END TEST blockdev_xnvme 00:21:47.248 ************************************ 00:21:47.248 20:49:42 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:47.248 20:49:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:47.248 20:49:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.248 20:49:42 -- 
common/autotest_common.sh@10 -- # set +x 00:21:47.248 ************************************ 00:21:47.248 START TEST ublk 00:21:47.248 ************************************ 00:21:47.248 20:49:42 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:47.248 * Looking for test storage... 00:21:47.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:47.248 20:49:42 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:47.248 20:49:42 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:21:47.248 20:49:42 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:47.507 20:49:42 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:47.507 20:49:42 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.507 20:49:42 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.507 20:49:42 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.507 20:49:42 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.507 20:49:42 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.507 20:49:42 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.507 20:49:42 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.507 20:49:42 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.507 20:49:42 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.507 20:49:42 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.507 20:49:42 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.507 20:49:42 ublk -- scripts/common.sh@344 -- # case "$op" in 00:21:47.507 20:49:42 ublk -- scripts/common.sh@345 -- # : 1 00:21:47.507 20:49:42 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.507 20:49:42 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.507 20:49:42 ublk -- scripts/common.sh@365 -- # decimal 1 00:21:47.507 20:49:42 ublk -- scripts/common.sh@353 -- # local d=1 00:21:47.507 20:49:42 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.507 20:49:42 ublk -- scripts/common.sh@355 -- # echo 1 00:21:47.507 20:49:42 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.507 20:49:42 ublk -- scripts/common.sh@366 -- # decimal 2 00:21:47.507 20:49:42 ublk -- scripts/common.sh@353 -- # local d=2 00:21:47.507 20:49:42 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.507 20:49:42 ublk -- scripts/common.sh@355 -- # echo 2 00:21:47.507 20:49:42 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.507 20:49:42 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.507 20:49:42 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.507 20:49:42 ublk -- scripts/common.sh@368 -- # return 0 00:21:47.507 20:49:42 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.507 20:49:42 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.507 --rc genhtml_branch_coverage=1 00:21:47.507 --rc genhtml_function_coverage=1 00:21:47.507 --rc genhtml_legend=1 00:21:47.507 --rc geninfo_all_blocks=1 00:21:47.507 --rc geninfo_unexecuted_blocks=1 00:21:47.507 00:21:47.507 ' 00:21:47.507 20:49:42 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.507 --rc genhtml_branch_coverage=1 00:21:47.507 --rc genhtml_function_coverage=1 00:21:47.507 --rc genhtml_legend=1 00:21:47.507 --rc geninfo_all_blocks=1 00:21:47.507 --rc geninfo_unexecuted_blocks=1 00:21:47.507 00:21:47.507 ' 00:21:47.507 20:49:42 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.507 --rc genhtml_branch_coverage=1 00:21:47.507 --rc genhtml_function_coverage=1 00:21:47.507 --rc genhtml_legend=1 00:21:47.507 --rc geninfo_all_blocks=1 00:21:47.507 --rc geninfo_unexecuted_blocks=1 00:21:47.507 00:21:47.507 ' 00:21:47.507 20:49:42 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.507 --rc genhtml_branch_coverage=1 00:21:47.507 --rc genhtml_function_coverage=1 00:21:47.507 --rc genhtml_legend=1 00:21:47.507 --rc geninfo_all_blocks=1 00:21:47.507 --rc geninfo_unexecuted_blocks=1 00:21:47.507 00:21:47.507 ' 00:21:47.507 20:49:42 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:47.507 20:49:42 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:47.507 20:49:42 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:47.507 20:49:42 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:47.507 20:49:42 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:47.507 20:49:42 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:47.507 20:49:42 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:47.507 20:49:42 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:47.507 20:49:42 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:21:47.507 20:49:42 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:21:47.507 20:49:42 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:21:47.507 20:49:42 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:21:47.507 20:49:42 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:21:47.507 20:49:42 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:21:47.507 20:49:42 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:21:47.507 20:49:42 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:21:47.507 20:49:42 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:21:47.507 20:49:42 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:21:47.507 20:49:42 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:21:47.507 20:49:42 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:21:47.507 20:49:42 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:47.507 20:49:42 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.507 20:49:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:47.507 ************************************ 00:21:47.507 START TEST test_save_ublk_config 00:21:47.507 ************************************ 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75887 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75887 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75887 ']' 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.507 20:49:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:47.507 [2024-11-26 20:49:42.438453] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
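Everything in the ublk suite presupposes the kernel-side driver that modprobe ublk_drv loaded above; the SPDK target only implements the backend, while /dev/ublk-control and the /dev/ublkb* block nodes come from the kernel module. A pre-flight check along these lines (a sketch, not part of the test scripts):

modprobe ublk_drv
test -c /dev/ublk-control && echo 'kernel ublk driver ready'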
00:21:47.507 [2024-11-26 20:49:42.438642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75887 ] 00:21:47.766 [2024-11-26 20:49:42.644011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.026 [2024-11-26 20:49:42.811769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.045 20:49:43 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.045 20:49:43 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:49.045 20:49:43 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:21:49.045 20:49:43 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:21:49.045 20:49:43 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.045 20:49:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:49.045 [2024-11-26 20:49:43.746642] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:49.045 [2024-11-26 20:49:43.747646] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:49.045 malloc0 00:21:49.045 [2024-11-26 20:49:43.832806] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:49.045 [2024-11-26 20:49:43.832912] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:49.045 [2024-11-26 20:49:43.832927] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:49.045 [2024-11-26 20:49:43.832936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:49.045 [2024-11-26 20:49:43.840817] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:49.045 [2024-11-26 20:49:43.840845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:49.045 [2024-11-26 20:49:43.848664] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:49.045 [2024-11-26 20:49:43.848766] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:49.045 [2024-11-26 20:49:43.865673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:49.045 0 00:21:49.045 20:49:43 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.045 20:49:43 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:21:49.045 20:49:43 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.045 20:49:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:49.304 20:49:44 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.304 20:49:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:21:49.304 "subsystems": [ 00:21:49.304 { 00:21:49.304 "subsystem": "fsdev", 00:21:49.304 "config": [ 00:21:49.304 { 00:21:49.304 "method": "fsdev_set_opts", 00:21:49.304 "params": { 00:21:49.304 "fsdev_io_pool_size": 65535, 00:21:49.304 "fsdev_io_cache_size": 256 00:21:49.304 } 00:21:49.304 } 00:21:49.304 ] 00:21:49.304 }, 00:21:49.304 { 00:21:49.304 "subsystem": "keyring", 00:21:49.304 "config": [] 00:21:49.304 }, 00:21:49.304 { 00:21:49.304 "subsystem": "iobuf", 00:21:49.304 "config": [ 00:21:49.304 { 
00:21:49.304 "method": "iobuf_set_options", 00:21:49.304 "params": { 00:21:49.304 "small_pool_count": 8192, 00:21:49.304 "large_pool_count": 1024, 00:21:49.304 "small_bufsize": 8192, 00:21:49.304 "large_bufsize": 135168, 00:21:49.304 "enable_numa": false 00:21:49.304 } 00:21:49.304 } 00:21:49.304 ] 00:21:49.304 }, 00:21:49.304 { 00:21:49.304 "subsystem": "sock", 00:21:49.304 "config": [ 00:21:49.304 { 00:21:49.304 "method": "sock_set_default_impl", 00:21:49.304 "params": { 00:21:49.304 "impl_name": "posix" 00:21:49.304 } 00:21:49.304 }, 00:21:49.304 { 00:21:49.304 "method": "sock_impl_set_options", 00:21:49.304 "params": { 00:21:49.304 "impl_name": "ssl", 00:21:49.304 "recv_buf_size": 4096, 00:21:49.304 "send_buf_size": 4096, 00:21:49.304 "enable_recv_pipe": true, 00:21:49.304 "enable_quickack": false, 00:21:49.304 "enable_placement_id": 0, 00:21:49.304 "enable_zerocopy_send_server": true, 00:21:49.304 "enable_zerocopy_send_client": false, 00:21:49.304 "zerocopy_threshold": 0, 00:21:49.304 "tls_version": 0, 00:21:49.304 "enable_ktls": false 00:21:49.304 } 00:21:49.304 }, 00:21:49.304 { 00:21:49.304 "method": "sock_impl_set_options", 00:21:49.304 "params": { 00:21:49.304 "impl_name": "posix", 00:21:49.304 "recv_buf_size": 2097152, 00:21:49.304 "send_buf_size": 2097152, 00:21:49.304 "enable_recv_pipe": true, 00:21:49.304 "enable_quickack": false, 00:21:49.304 "enable_placement_id": 0, 00:21:49.304 "enable_zerocopy_send_server": true, 00:21:49.304 "enable_zerocopy_send_client": false, 00:21:49.304 "zerocopy_threshold": 0, 00:21:49.304 "tls_version": 0, 00:21:49.304 "enable_ktls": false 00:21:49.304 } 00:21:49.304 } 00:21:49.304 ] 00:21:49.304 }, 00:21:49.304 { 00:21:49.304 "subsystem": "vmd", 00:21:49.304 "config": [] 00:21:49.304 }, 00:21:49.304 { 00:21:49.304 "subsystem": "accel", 00:21:49.304 "config": [ 00:21:49.304 { 00:21:49.304 "method": "accel_set_options", 00:21:49.304 "params": { 00:21:49.304 "small_cache_size": 128, 00:21:49.304 "large_cache_size": 16, 00:21:49.304 "task_count": 2048, 00:21:49.304 "sequence_count": 2048, 00:21:49.304 "buf_count": 2048 00:21:49.304 } 00:21:49.304 } 00:21:49.304 ] 00:21:49.304 }, 00:21:49.304 { 00:21:49.304 "subsystem": "bdev", 00:21:49.304 "config": [ 00:21:49.304 { 00:21:49.304 "method": "bdev_set_options", 00:21:49.304 "params": { 00:21:49.304 "bdev_io_pool_size": 65535, 00:21:49.304 "bdev_io_cache_size": 256, 00:21:49.304 "bdev_auto_examine": true, 00:21:49.304 "iobuf_small_cache_size": 128, 00:21:49.304 "iobuf_large_cache_size": 16 00:21:49.304 } 00:21:49.304 }, 00:21:49.304 { 00:21:49.304 "method": "bdev_raid_set_options", 00:21:49.304 "params": { 00:21:49.304 "process_window_size_kb": 1024, 00:21:49.304 "process_max_bandwidth_mb_sec": 0 00:21:49.304 } 00:21:49.304 }, 00:21:49.304 { 00:21:49.304 "method": "bdev_iscsi_set_options", 00:21:49.304 "params": { 00:21:49.304 "timeout_sec": 30 00:21:49.304 } 00:21:49.304 }, 00:21:49.304 { 00:21:49.304 "method": "bdev_nvme_set_options", 00:21:49.304 "params": { 00:21:49.304 "action_on_timeout": "none", 00:21:49.304 "timeout_us": 0, 00:21:49.304 "timeout_admin_us": 0, 00:21:49.304 "keep_alive_timeout_ms": 10000, 00:21:49.304 "arbitration_burst": 0, 00:21:49.304 "low_priority_weight": 0, 00:21:49.304 "medium_priority_weight": 0, 00:21:49.304 "high_priority_weight": 0, 00:21:49.304 "nvme_adminq_poll_period_us": 10000, 00:21:49.304 "nvme_ioq_poll_period_us": 0, 00:21:49.304 "io_queue_requests": 0, 00:21:49.304 "delay_cmd_submit": true, 00:21:49.305 "transport_retry_count": 4, 00:21:49.305 
"bdev_retry_count": 3, 00:21:49.305 "transport_ack_timeout": 0, 00:21:49.305 "ctrlr_loss_timeout_sec": 0, 00:21:49.305 "reconnect_delay_sec": 0, 00:21:49.305 "fast_io_fail_timeout_sec": 0, 00:21:49.305 "disable_auto_failback": false, 00:21:49.305 "generate_uuids": false, 00:21:49.305 "transport_tos": 0, 00:21:49.305 "nvme_error_stat": false, 00:21:49.305 "rdma_srq_size": 0, 00:21:49.305 "io_path_stat": false, 00:21:49.305 "allow_accel_sequence": false, 00:21:49.305 "rdma_max_cq_size": 0, 00:21:49.305 "rdma_cm_event_timeout_ms": 0, 00:21:49.305 "dhchap_digests": [ 00:21:49.305 "sha256", 00:21:49.305 "sha384", 00:21:49.305 "sha512" 00:21:49.305 ], 00:21:49.305 "dhchap_dhgroups": [ 00:21:49.305 "null", 00:21:49.305 "ffdhe2048", 00:21:49.305 "ffdhe3072", 00:21:49.305 "ffdhe4096", 00:21:49.305 "ffdhe6144", 00:21:49.305 "ffdhe8192" 00:21:49.305 ] 00:21:49.305 } 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "method": "bdev_nvme_set_hotplug", 00:21:49.305 "params": { 00:21:49.305 "period_us": 100000, 00:21:49.305 "enable": false 00:21:49.305 } 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "method": "bdev_malloc_create", 00:21:49.305 "params": { 00:21:49.305 "name": "malloc0", 00:21:49.305 "num_blocks": 8192, 00:21:49.305 "block_size": 4096, 00:21:49.305 "physical_block_size": 4096, 00:21:49.305 "uuid": "56dce266-d990-4c3e-94a6-3781b20bda87", 00:21:49.305 "optimal_io_boundary": 0, 00:21:49.305 "md_size": 0, 00:21:49.305 "dif_type": 0, 00:21:49.305 "dif_is_head_of_md": false, 00:21:49.305 "dif_pi_format": 0 00:21:49.305 } 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "method": "bdev_wait_for_examine" 00:21:49.305 } 00:21:49.305 ] 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "subsystem": "scsi", 00:21:49.305 "config": null 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "subsystem": "scheduler", 00:21:49.305 "config": [ 00:21:49.305 { 00:21:49.305 "method": "framework_set_scheduler", 00:21:49.305 "params": { 00:21:49.305 "name": "static" 00:21:49.305 } 00:21:49.305 } 00:21:49.305 ] 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "subsystem": "vhost_scsi", 00:21:49.305 "config": [] 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "subsystem": "vhost_blk", 00:21:49.305 "config": [] 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "subsystem": "ublk", 00:21:49.305 "config": [ 00:21:49.305 { 00:21:49.305 "method": "ublk_create_target", 00:21:49.305 "params": { 00:21:49.305 "cpumask": "1" 00:21:49.305 } 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "method": "ublk_start_disk", 00:21:49.305 "params": { 00:21:49.305 "bdev_name": "malloc0", 00:21:49.305 "ublk_id": 0, 00:21:49.305 "num_queues": 1, 00:21:49.305 "queue_depth": 128 00:21:49.305 } 00:21:49.305 } 00:21:49.305 ] 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "subsystem": "nbd", 00:21:49.305 "config": [] 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "subsystem": "nvmf", 00:21:49.305 "config": [ 00:21:49.305 { 00:21:49.305 "method": "nvmf_set_config", 00:21:49.305 "params": { 00:21:49.305 "discovery_filter": "match_any", 00:21:49.305 "admin_cmd_passthru": { 00:21:49.305 "identify_ctrlr": false 00:21:49.305 }, 00:21:49.305 "dhchap_digests": [ 00:21:49.305 "sha256", 00:21:49.305 "sha384", 00:21:49.305 "sha512" 00:21:49.305 ], 00:21:49.305 "dhchap_dhgroups": [ 00:21:49.305 "null", 00:21:49.305 "ffdhe2048", 00:21:49.305 "ffdhe3072", 00:21:49.305 "ffdhe4096", 00:21:49.305 "ffdhe6144", 00:21:49.305 "ffdhe8192" 00:21:49.305 ] 00:21:49.305 } 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "method": "nvmf_set_max_subsystems", 00:21:49.305 "params": { 00:21:49.305 "max_subsystems": 1024 
00:21:49.305 } 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "method": "nvmf_set_crdt", 00:21:49.305 "params": { 00:21:49.305 "crdt1": 0, 00:21:49.305 "crdt2": 0, 00:21:49.305 "crdt3": 0 00:21:49.305 } 00:21:49.305 } 00:21:49.305 ] 00:21:49.305 }, 00:21:49.305 { 00:21:49.305 "subsystem": "iscsi", 00:21:49.305 "config": [ 00:21:49.305 { 00:21:49.305 "method": "iscsi_set_options", 00:21:49.305 "params": { 00:21:49.305 "node_base": "iqn.2016-06.io.spdk", 00:21:49.305 "max_sessions": 128, 00:21:49.305 "max_connections_per_session": 2, 00:21:49.305 "max_queue_depth": 64, 00:21:49.305 "default_time2wait": 2, 00:21:49.305 "default_time2retain": 20, 00:21:49.305 "first_burst_length": 8192, 00:21:49.305 "immediate_data": true, 00:21:49.305 "allow_duplicated_isid": false, 00:21:49.305 "error_recovery_level": 0, 00:21:49.305 "nop_timeout": 60, 00:21:49.305 "nop_in_interval": 30, 00:21:49.305 "disable_chap": false, 00:21:49.305 "require_chap": false, 00:21:49.305 "mutual_chap": false, 00:21:49.305 "chap_group": 0, 00:21:49.305 "max_large_datain_per_connection": 64, 00:21:49.305 "max_r2t_per_connection": 4, 00:21:49.305 "pdu_pool_size": 36864, 00:21:49.305 "immediate_data_pool_size": 16384, 00:21:49.305 "data_out_pool_size": 2048 00:21:49.305 } 00:21:49.305 } 00:21:49.305 ] 00:21:49.305 } 00:21:49.305 ] 00:21:49.305 }' 00:21:49.305 20:49:44 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75887 00:21:49.305 20:49:44 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75887 ']' 00:21:49.305 20:49:44 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75887 00:21:49.305 20:49:44 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:49.305 20:49:44 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.305 20:49:44 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75887 00:21:49.305 20:49:44 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:49.305 20:49:44 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:49.305 killing process with pid 75887 00:21:49.305 20:49:44 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75887' 00:21:49.305 20:49:44 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75887 00:21:49.305 20:49:44 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75887 00:21:51.207 [2024-11-26 20:49:45.695072] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:51.207 [2024-11-26 20:49:45.733704] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:51.207 [2024-11-26 20:49:45.733836] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:51.207 [2024-11-26 20:49:45.741660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:51.207 [2024-11-26 20:49:45.741714] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:51.207 [2024-11-26 20:49:45.741730] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:51.207 [2024-11-26 20:49:45.741756] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:51.207 [2024-11-26 20:49:45.741899] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:53.111 20:49:47 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75953 00:21:53.111 20:49:47 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75953 00:21:53.111 20:49:47 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:21:53.111 20:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75953 ']' 00:21:53.111 20:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.111 20:49:47 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:21:53.111 "subsystems": [ 00:21:53.111 { 00:21:53.111 "subsystem": "fsdev", 00:21:53.111 "config": [ 00:21:53.111 { 00:21:53.111 "method": "fsdev_set_opts", 00:21:53.111 "params": { 00:21:53.111 "fsdev_io_pool_size": 65535, 00:21:53.111 "fsdev_io_cache_size": 256 00:21:53.111 } 00:21:53.111 } 00:21:53.111 ] 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "subsystem": "keyring", 00:21:53.111 "config": [] 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "subsystem": "iobuf", 00:21:53.111 "config": [ 00:21:53.111 { 00:21:53.111 "method": "iobuf_set_options", 00:21:53.111 "params": { 00:21:53.111 "small_pool_count": 8192, 00:21:53.111 "large_pool_count": 1024, 00:21:53.111 "small_bufsize": 8192, 00:21:53.111 "large_bufsize": 135168, 00:21:53.111 "enable_numa": false 00:21:53.111 } 00:21:53.111 } 00:21:53.111 ] 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "subsystem": "sock", 00:21:53.111 "config": [ 00:21:53.111 { 00:21:53.111 "method": "sock_set_default_impl", 00:21:53.111 "params": { 00:21:53.111 "impl_name": "posix" 00:21:53.111 } 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "method": "sock_impl_set_options", 00:21:53.111 "params": { 00:21:53.111 "impl_name": "ssl", 00:21:53.111 "recv_buf_size": 4096, 00:21:53.111 "send_buf_size": 4096, 00:21:53.111 "enable_recv_pipe": true, 00:21:53.111 "enable_quickack": false, 00:21:53.111 "enable_placement_id": 0, 00:21:53.111 "enable_zerocopy_send_server": true, 00:21:53.111 "enable_zerocopy_send_client": false, 00:21:53.111 "zerocopy_threshold": 0, 00:21:53.111 "tls_version": 0, 00:21:53.111 "enable_ktls": false 00:21:53.111 } 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "method": "sock_impl_set_options", 00:21:53.111 "params": { 00:21:53.111 "impl_name": "posix", 00:21:53.111 "recv_buf_size": 2097152, 00:21:53.111 "send_buf_size": 2097152, 00:21:53.111 "enable_recv_pipe": true, 00:21:53.111 "enable_quickack": false, 00:21:53.111 "enable_placement_id": 0, 00:21:53.111 "enable_zerocopy_send_server": true, 00:21:53.111 "enable_zerocopy_send_client": false, 00:21:53.111 "zerocopy_threshold": 0, 00:21:53.111 "tls_version": 0, 00:21:53.111 "enable_ktls": false 00:21:53.111 } 00:21:53.111 } 00:21:53.111 ] 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "subsystem": "vmd", 00:21:53.111 "config": [] 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "subsystem": "accel", 00:21:53.111 "config": [ 00:21:53.111 { 00:21:53.111 "method": "accel_set_options", 00:21:53.111 "params": { 00:21:53.111 "small_cache_size": 128, 00:21:53.111 "large_cache_size": 16, 00:21:53.111 "task_count": 2048, 00:21:53.111 "sequence_count": 2048, 00:21:53.111 "buf_count": 2048 00:21:53.111 } 00:21:53.111 } 00:21:53.111 ] 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "subsystem": "bdev", 00:21:53.111 "config": [ 00:21:53.111 { 00:21:53.111 "method": "bdev_set_options", 00:21:53.111 "params": { 00:21:53.111 "bdev_io_pool_size": 65535, 00:21:53.111 "bdev_io_cache_size": 256, 00:21:53.111 "bdev_auto_examine": true, 00:21:53.111 "iobuf_small_cache_size": 128, 00:21:53.111 "iobuf_large_cache_size": 16 00:21:53.111 } 00:21:53.111 
}, 00:21:53.111 { 00:21:53.111 "method": "bdev_raid_set_options", 00:21:53.111 "params": { 00:21:53.111 "process_window_size_kb": 1024, 00:21:53.111 "process_max_bandwidth_mb_sec": 0 00:21:53.111 } 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "method": "bdev_iscsi_set_options", 00:21:53.111 "params": { 00:21:53.111 "timeout_sec": 30 00:21:53.111 } 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "method": "bdev_nvme_set_options", 00:21:53.111 "params": { 00:21:53.111 "action_on_timeout": "none", 00:21:53.111 "timeout_us": 0, 00:21:53.111 "timeout_admin_us": 0, 00:21:53.111 "keep_alive_timeout_ms": 10000, 00:21:53.111 "arbitration_burst": 0, 00:21:53.111 "low_priority_weight": 0, 00:21:53.111 "medium_priority_weight": 0, 00:21:53.111 "high_priority_weight": 0, 00:21:53.111 "nvme_adminq_poll_period_us": 10000, 00:21:53.111 "nvme_ioq_poll_period_us": 0, 00:21:53.111 "io_queue_requests": 0, 00:21:53.111 "delay_cmd_submit": true, 00:21:53.111 "transport_retry_count": 4, 00:21:53.111 "bdev_retry_count": 3, 00:21:53.111 "transport_ack_timeout": 0, 00:21:53.111 "ctrlr_loss_timeout_sec": 0, 00:21:53.111 "reconnect_delay_sec": 0, 00:21:53.111 "fast_io_fail_timeout_sec": 0, 00:21:53.111 "disable_auto_failback": false, 00:21:53.111 "generate_uuids": false, 00:21:53.111 "transport_tos": 0, 00:21:53.111 "nvme_error_stat": false, 00:21:53.111 "rdma_srq_size": 0, 00:21:53.111 "io_path_stat": false, 00:21:53.111 "allow_accel_sequence": false, 00:21:53.111 "rdma_max_cq_size": 0, 00:21:53.111 "rdma_cm_event_timeout_ms": 0, 00:21:53.111 "dhchap_digests": [ 00:21:53.111 "sha256", 00:21:53.111 "sha384", 00:21:53.111 "sha512" 00:21:53.111 ], 00:21:53.111 "dhchap_dhgroups": [ 00:21:53.111 "null", 00:21:53.111 "ffdhe2048", 00:21:53.111 "ffdhe3072", 00:21:53.111 "ffdhe4096", 00:21:53.111 "ffdhe6144", 00:21:53.111 "ffdhe8192" 00:21:53.111 ] 00:21:53.111 } 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "method": "bdev_nvme_set_hotplug", 00:21:53.111 "params": { 00:21:53.111 "period_us": 100000, 00:21:53.111 "enable": false 00:21:53.111 } 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "method": "bdev_malloc_create", 00:21:53.111 "params": { 00:21:53.111 "name": "malloc0", 00:21:53.111 "num_blocks": 8192, 00:21:53.111 "block_size": 4096, 00:21:53.111 "physical_block_size": 4096, 00:21:53.111 "uuid": "56dce266-d990-4c3e-94a6-3781b20bda87", 00:21:53.111 "optimal_io_boundary": 0, 00:21:53.111 "md_size": 0, 00:21:53.111 "dif_type": 0, 00:21:53.111 "dif_is_head_of_md": false, 00:21:53.111 "dif_pi_format": 0 00:21:53.111 } 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "method": "bdev_wait_for_examine" 00:21:53.111 } 00:21:53.111 ] 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "subsystem": "scsi", 00:21:53.111 "config": null 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "subsystem": "scheduler", 00:21:53.111 "config": [ 00:21:53.111 { 00:21:53.111 "method": "framework_set_scheduler", 00:21:53.111 "params": { 00:21:53.111 "name": "static" 00:21:53.111 } 00:21:53.111 } 00:21:53.111 ] 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "subsystem": "vhost_scsi", 00:21:53.111 "config": [] 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "subsystem": "vhost_blk", 00:21:53.111 "config": [] 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "subsystem": "ublk", 00:21:53.111 "config": [ 00:21:53.111 { 00:21:53.111 "method": "ublk_create_target", 00:21:53.111 "params": { 00:21:53.111 "cpumask": "1" 00:21:53.111 } 00:21:53.111 }, 00:21:53.111 { 00:21:53.111 "method": "ublk_start_disk", 00:21:53.111 "params": { 00:21:53.111 "bdev_name": "malloc0", 00:21:53.111 "ublk_id": 0, 
00:21:53.111 "num_queues": 1, 00:21:53.111 "queue_depth": 128 00:21:53.112 } 00:21:53.112 } 00:21:53.112 ] 00:21:53.112 }, 00:21:53.112 { 00:21:53.112 "subsystem": "nbd", 00:21:53.112 "config": [] 00:21:53.112 }, 00:21:53.112 { 00:21:53.112 "subsystem": "nvmf", 00:21:53.112 "config": [ 00:21:53.112 { 00:21:53.112 "method": "nvmf_set_config", 00:21:53.112 "params": { 00:21:53.112 "discovery_filter": "match_any", 00:21:53.112 "admin_cmd_passthru": { 00:21:53.112 "identify_ctrlr": false 00:21:53.112 }, 00:21:53.112 "dhchap_digests": [ 00:21:53.112 "sha256", 00:21:53.112 "sha384", 00:21:53.112 "sha512" 00:21:53.112 ], 00:21:53.112 "dhchap_dhgroups": [ 00:21:53.112 "null", 00:21:53.112 "ffdhe2048", 00:21:53.112 "ffdhe3072", 00:21:53.112 "ffdhe4096", 00:21:53.112 "ffdhe6144", 00:21:53.112 "ffdhe8192" 00:21:53.112 ] 00:21:53.112 } 00:21:53.112 }, 00:21:53.112 { 00:21:53.112 "method": "nvmf_set_max_subsystems", 00:21:53.112 "params": { 00:21:53.112 "max_subsystems": 1024 00:21:53.112 } 00:21:53.112 }, 00:21:53.112 { 00:21:53.112 "method": "nvmf_set_crdt", 00:21:53.112 "params": { 00:21:53.112 "crdt1": 0, 00:21:53.112 "crdt2": 0, 00:21:53.112 "crdt3": 0 00:21:53.112 } 00:21:53.112 } 00:21:53.112 ] 00:21:53.112 }, 00:21:53.112 { 00:21:53.112 "subsystem": "iscsi", 00:21:53.112 "config": [ 00:21:53.112 { 00:21:53.112 "method": "iscsi_set_options", 00:21:53.112 "params": { 00:21:53.112 "node_base": "iqn.2016-06.io.spdk", 00:21:53.112 "max_sessions": 128, 00:21:53.112 "max_connections_per_session": 2, 00:21:53.112 "max_queue_depth": 64, 00:21:53.112 "default_time2wait": 2, 00:21:53.112 "default_time2retain": 20, 00:21:53.112 "first_burst_length": 8192, 00:21:53.112 "immediate_data": true, 00:21:53.112 "allow_duplicated_isid": false, 00:21:53.112 "error_recovery_level": 0, 00:21:53.112 "nop_timeout": 60, 00:21:53.112 "nop_in_interval": 30, 00:21:53.112 "disable_chap": false, 00:21:53.112 "require_chap": false, 00:21:53.112 "mutual_chap": false, 00:21:53.112 "chap_group": 0, 00:21:53.112 "max_large_datain_per_connection": 64, 00:21:53.112 "max_r2t_per_connection": 4, 00:21:53.112 "pdu_pool_size": 36864, 00:21:53.112 "immediate_data_pool_size": 16384, 00:21:53.112 "data_out_pool_size": 2048 00:21:53.112 } 00:21:53.112 } 00:21:53.112 ] 00:21:53.112 } 00:21:53.112 ] 00:21:53.112 }' 00:21:53.112 20:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.112 20:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.112 20:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.112 20:49:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:53.371 [2024-11-26 20:49:48.122649] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
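The -c /dev/fd/63 argument visible above is the signature of bash process substitution: the test echoes the JSON captured from save_config straight into the new target, so the replayed config never touches disk. Equivalently (the temp-file path is an assumption):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c <(cat /tmp/saved_ublk.json)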
00:21:53.371 [2024-11-26 20:49:48.122771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75953 ] 00:21:53.371 [2024-11-26 20:49:48.296434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.628 [2024-11-26 20:49:48.415346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.562 [2024-11-26 20:49:49.482631] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:54.562 [2024-11-26 20:49:49.483867] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:54.562 [2024-11-26 20:49:49.490765] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:54.562 [2024-11-26 20:49:49.490849] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:54.562 [2024-11-26 20:49:49.490862] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:54.562 [2024-11-26 20:49:49.490871] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:54.562 [2024-11-26 20:49:49.499711] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:54.562 [2024-11-26 20:49:49.499753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:54.562 [2024-11-26 20:49:49.506644] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:54.562 [2024-11-26 20:49:49.506746] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:54.562 [2024-11-26 20:49:49.523633] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:54.821 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.821 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:54.821 20:49:49 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:21:54.821 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.821 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:54.821 20:49:49 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:21:54.821 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.821 20:49:49 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75953 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75953 ']' 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75953 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75953 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.822 killing process with pid 75953 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75953' 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75953 00:21:54.822 20:49:49 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75953 00:21:56.725 [2024-11-26 20:49:51.232874] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:56.725 [2024-11-26 20:49:51.277647] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:56.725 [2024-11-26 20:49:51.277795] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:56.725 [2024-11-26 20:49:51.286633] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:56.725 [2024-11-26 20:49:51.286694] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:56.725 [2024-11-26 20:49:51.286704] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:56.725 [2024-11-26 20:49:51.286728] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:56.725 [2024-11-26 20:49:51.286872] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:58.697 20:49:53 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:21:58.697 00:21:58.697 real 0m10.916s 00:21:58.697 user 0m8.379s 00:21:58.697 sys 0m3.491s 00:21:58.697 20:49:53 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.697 20:49:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:58.697 ************************************ 00:21:58.697 END TEST test_save_ublk_config 00:21:58.697 ************************************ 00:21:58.697 20:49:53 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76046 00:21:58.697 20:49:53 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:58.697 20:49:53 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.697 20:49:53 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76046 00:21:58.697 20:49:53 ublk -- common/autotest_common.sh@835 -- # '[' -z 76046 ']' 00:21:58.697 20:49:53 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.697 20:49:53 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.697 20:49:53 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.697 20:49:53 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.697 20:49:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:58.697 [2024-11-26 20:49:53.358007] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
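The replacement target is launched with a two-core mask (-m 0x3) and ublk debug logging (-L ublk), and test_create_ublk then walks the create path end to end. A sketch of that sequence in plain rpc.py terms, mirroring the rpc_cmd calls visible in the trace below:

    # Create the ublk target, back it with a 128 MiB malloc bdev,
    # and expose it to the kernel as /dev/ublkb0.
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create 128 4096        # prints the bdev name, Malloc0
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
    scripts/rpc.py ublk_get_disks -n 0                # reports ublk_device, queue_depth, num_queues

Each ublk_start_disk shows up in the driver trace as the control-command triple ADD_DEV, SET_PARAMS, START_DEV, which is exactly the completion sequence logged below.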
00:21:58.697 [2024-11-26 20:49:53.358155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76046 ] 00:21:58.697 [2024-11-26 20:49:53.530815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:58.697 [2024-11-26 20:49:53.653160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.697 [2024-11-26 20:49:53.653192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.632 20:49:54 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.632 20:49:54 ublk -- common/autotest_common.sh@868 -- # return 0 00:21:59.632 20:49:54 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:21:59.632 20:49:54 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:59.632 20:49:54 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.632 20:49:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:59.632 ************************************ 00:21:59.632 START TEST test_create_ublk 00:21:59.632 ************************************ 00:21:59.632 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:21:59.632 20:49:54 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:21:59.633 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.633 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:59.633 [2024-11-26 20:49:54.623693] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:59.891 [2024-11-26 20:49:54.626808] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:59.891 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.891 20:49:54 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:21:59.891 20:49:54 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:21:59.891 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.891 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:00.150 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.150 20:49:54 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:22:00.150 20:49:54 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:00.150 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.150 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:00.150 [2024-11-26 20:49:54.943813] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:22:00.150 [2024-11-26 20:49:54.944272] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:00.150 [2024-11-26 20:49:54.944293] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:00.150 [2024-11-26 20:49:54.944303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:00.150 [2024-11-26 20:49:54.951691] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:00.150 [2024-11-26 20:49:54.951717] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:00.150 
[2024-11-26 20:49:54.959652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:00.150 [2024-11-26 20:49:54.960239] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:00.150 [2024-11-26 20:49:54.982683] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:00.150 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.150 20:49:54 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:22:00.150 20:49:54 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:22:00.150 20:49:54 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:22:00.150 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.150 20:49:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:00.150 20:49:55 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.150 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:22:00.150 { 00:22:00.150 "ublk_device": "/dev/ublkb0", 00:22:00.150 "id": 0, 00:22:00.150 "queue_depth": 512, 00:22:00.150 "num_queues": 4, 00:22:00.150 "bdev_name": "Malloc0" 00:22:00.150 } 00:22:00.150 ]' 00:22:00.150 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:22:00.150 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:00.150 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:22:00.150 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:22:00.150 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:22:00.409 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:22:00.409 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:22:00.409 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:22:00.409 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:22:00.409 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:00.409 20:49:55 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:22:00.409 20:49:55 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:22:00.409 20:49:55 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:22:00.409 20:49:55 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:22:00.409 20:49:55 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:22:00.409 20:49:55 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:22:00.409 20:49:55 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:22:00.409 20:49:55 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:22:00.409 20:49:55 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:22:00.409 20:49:55 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:22:00.410 20:49:55 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
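run_fio_test has now assembled a time-based write job: fio fills /dev/ublkb0 with the 0xcc pattern for 10 seconds, and the warning it prints next ("verification read phase will never start") is expected, since --time_based lets the write phase consume the entire runtime. A hypothetical follow-up job, not part of this run, could read the device back and check the pattern explicitly:

    # Illustrative only: re-read and verify the 0xcc pattern written by the job above.
    fio --name=verify_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=read --direct=1 --verify=pattern --verify_pattern=0xcc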
00:22:00.410 20:49:55 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:22:00.410 fio: verification read phase will never start because write phase uses all of runtime 00:22:00.410 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:22:00.410 fio-3.35 00:22:00.410 Starting 1 process 00:22:12.635 00:22:12.635 fio_test: (groupid=0, jobs=1): err= 0: pid=76098: Tue Nov 26 20:50:05 2024 00:22:12.635 write: IOPS=15.5k, BW=60.6MiB/s (63.6MB/s)(606MiB/10001msec); 0 zone resets 00:22:12.635 clat (usec): min=40, max=4071, avg=63.61, stdev=100.31 00:22:12.635 lat (usec): min=40, max=4072, avg=64.07, stdev=100.32 00:22:12.635 clat percentiles (usec): 00:22:12.635 | 1.00th=[ 42], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:22:12.635 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:22:12.635 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 68], 95.00th=[ 73], 00:22:12.635 | 99.00th=[ 86], 99.50th=[ 93], 99.90th=[ 2024], 99.95th=[ 2802], 00:22:12.635 | 99.99th=[ 3523] 00:22:12.635 bw ( KiB/s): min=60416, max=68232, per=100.00%, avg=62159.16, stdev=1669.55, samples=19 00:22:12.635 iops : min=15104, max=17058, avg=15539.79, stdev=417.39, samples=19 00:22:12.635 lat (usec) : 50=2.82%, 100=96.83%, 250=0.14%, 500=0.01%, 750=0.02% 00:22:12.635 lat (usec) : 1000=0.01% 00:22:12.635 lat (msec) : 2=0.08%, 4=0.10%, 10=0.01% 00:22:12.635 cpu : usr=3.19%, sys=9.80%, ctx=155262, majf=0, minf=795 00:22:12.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:12.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:12.635 issued rwts: total=0,155262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:12.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:12.635 00:22:12.635 Run status group 0 (all jobs): 00:22:12.635 WRITE: bw=60.6MiB/s (63.6MB/s), 60.6MiB/s-60.6MiB/s (63.6MB/s-63.6MB/s), io=606MiB (636MB), run=10001-10001msec 00:22:12.635 00:22:12.635 Disk stats (read/write): 00:22:12.635 ublkb0: ios=0/153620, merge=0/0, ticks=0/8692, in_queue=8693, util=99.11% 00:22:12.635 20:50:05 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.635 [2024-11-26 20:50:05.477692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:12.635 [2024-11-26 20:50:05.508102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:12.635 [2024-11-26 20:50:05.509009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:12.635 [2024-11-26 20:50:05.516679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:12.635 [2024-11-26 20:50:05.516995] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:12.635 [2024-11-26 20:50:05.517015] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.635 20:50:05 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.635 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.635 [2024-11-26 20:50:05.539721] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:22:12.635 request: 00:22:12.635 { 00:22:12.635 "ublk_id": 0, 00:22:12.635 "method": "ublk_stop_disk", 00:22:12.635 "req_id": 1 00:22:12.635 } 00:22:12.635 Got JSON-RPC error response 00:22:12.636 response: 00:22:12.636 { 00:22:12.636 "code": -19, 00:22:12.636 "message": "No such device" 00:22:12.636 } 00:22:12.636 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:12.636 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:22:12.636 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.636 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.636 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.636 20:50:05 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:22:12.636 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.636 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.636 [2024-11-26 20:50:05.555730] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:12.636 [2024-11-26 20:50:05.563638] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:12.636 [2024-11-26 20:50:05.563683] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:12.636 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.636 20:50:05 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:12.636 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.636 20:50:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.636 20:50:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.636 20:50:06 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:22:12.636 20:50:06 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:12.636 20:50:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.636 20:50:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.636 20:50:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.636 20:50:06 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:12.636 20:50:06 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:22:12.636 20:50:06 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:12.636 20:50:06 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:12.636 20:50:06 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.636 20:50:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.636 20:50:06 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.636 20:50:06 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:12.636 20:50:06 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:22:12.636 20:50:06 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:12.636 00:22:12.636 real 0m11.813s 00:22:12.636 user 0m0.685s 00:22:12.636 sys 0m1.097s 00:22:12.636 20:50:06 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.636 20:50:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.636 ************************************ 00:22:12.636 END TEST test_create_ublk 00:22:12.636 ************************************ 00:22:12.636 20:50:06 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:22:12.636 20:50:06 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:12.636 20:50:06 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.636 20:50:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.636 ************************************ 00:22:12.636 START TEST test_create_multi_ublk 00:22:12.636 ************************************ 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.636 [2024-11-26 20:50:06.500624] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:12.636 [2024-11-26 20:50:06.503131] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.636 [2024-11-26 20:50:06.792800] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:22:12.636 [2024-11-26 20:50:06.793268] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:12.636 [2024-11-26 20:50:06.793285] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:12.636 [2024-11-26 20:50:06.793299] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:12.636 [2024-11-26 20:50:06.808654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:12.636 [2024-11-26 20:50:06.808686] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:12.636 [2024-11-26 20:50:06.816670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:12.636 [2024-11-26 20:50:06.817353] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:12.636 [2024-11-26 20:50:06.826603] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.636 20:50:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.636 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.636 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:22:12.636 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:22:12.636 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.636 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.636 [2024-11-26 20:50:07.131822] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:22:12.636 [2024-11-26 20:50:07.132334] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:22:12.636 [2024-11-26 20:50:07.132356] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:12.636 [2024-11-26 20:50:07.132366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:12.636 [2024-11-26 20:50:07.139677] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:12.636 [2024-11-26 20:50:07.139702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:12.636 [2024-11-26 20:50:07.147654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:12.636 [2024-11-26 20:50:07.148299] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:12.636 [2024-11-26 20:50:07.154635] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:12.636 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.636 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:22:12.636 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:12.636 
20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:22:12.636 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.636 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.637 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.637 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:22:12.637 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:22:12.637 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.637 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.637 [2024-11-26 20:50:07.443798] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:22:12.637 [2024-11-26 20:50:07.444293] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:22:12.637 [2024-11-26 20:50:07.444312] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:22:12.637 [2024-11-26 20:50:07.444324] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:22:12.637 [2024-11-26 20:50:07.450643] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:12.637 [2024-11-26 20:50:07.450673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:12.637 [2024-11-26 20:50:07.458654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:12.637 [2024-11-26 20:50:07.459321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:22:12.637 [2024-11-26 20:50:07.472654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:22:12.637 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.637 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:22:12.637 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:12.637 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:22:12.637 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.637 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.895 [2024-11-26 20:50:07.783802] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:22:12.895 [2024-11-26 20:50:07.784275] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:22:12.895 [2024-11-26 20:50:07.784295] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:22:12.895 [2024-11-26 20:50:07.784305] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:22:12.895 
[2024-11-26 20:50:07.791701] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:12.895 [2024-11-26 20:50:07.791739] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:12.895 [2024-11-26 20:50:07.799649] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:12.895 [2024-11-26 20:50:07.800250] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:22:12.895 [2024-11-26 20:50:07.805113] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:22:12.895 { 00:22:12.895 "ublk_device": "/dev/ublkb0", 00:22:12.895 "id": 0, 00:22:12.895 "queue_depth": 512, 00:22:12.895 "num_queues": 4, 00:22:12.895 "bdev_name": "Malloc0" 00:22:12.895 }, 00:22:12.895 { 00:22:12.895 "ublk_device": "/dev/ublkb1", 00:22:12.895 "id": 1, 00:22:12.895 "queue_depth": 512, 00:22:12.895 "num_queues": 4, 00:22:12.895 "bdev_name": "Malloc1" 00:22:12.895 }, 00:22:12.895 { 00:22:12.895 "ublk_device": "/dev/ublkb2", 00:22:12.895 "id": 2, 00:22:12.895 "queue_depth": 512, 00:22:12.895 "num_queues": 4, 00:22:12.895 "bdev_name": "Malloc2" 00:22:12.895 }, 00:22:12.895 { 00:22:12.895 "ublk_device": "/dev/ublkb3", 00:22:12.895 "id": 3, 00:22:12.895 "queue_depth": 512, 00:22:12.895 "num_queues": 4, 00:22:12.895 "bdev_name": "Malloc3" 00:22:12.895 } 00:22:12.895 ]' 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:12.895 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:22:13.152 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:13.152 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:22:13.152 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:22:13.152 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:22:13.152 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:13.152 20:50:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:22:13.152 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:13.152 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:22:13.152 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:13.152 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:13.152 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:22:13.152 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:22:13.152 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:22:13.410 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:22:13.410 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:22:13.410 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:13.410 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:22:13.411 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:13.411 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:22:13.411 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:22:13.411 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:13.411 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:22:13.411 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:22:13.411 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:22:13.411 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:22:13.411 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:13.670 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:13.929 [2024-11-26 20:50:08.724786] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:13.929 [2024-11-26 20:50:08.767688] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:13.929 [2024-11-26 20:50:08.768665] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:13.929 [2024-11-26 20:50:08.775661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:13.929 [2024-11-26 20:50:08.775975] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:13.929 [2024-11-26 20:50:08.775990] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:13.929 [2024-11-26 20:50:08.791756] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:13.929 [2024-11-26 20:50:08.826676] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:13.929 [2024-11-26 20:50:08.827534] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:13.929 [2024-11-26 20:50:08.831637] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:13.929 [2024-11-26 20:50:08.831967] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:13.929 [2024-11-26 20:50:08.831987] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:13.929 [2024-11-26 20:50:08.839816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:22:13.929 [2024-11-26 20:50:08.869106] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:13.929 [2024-11-26 20:50:08.870098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:22:13.929 [2024-11-26 20:50:08.879656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:13.929 [2024-11-26 20:50:08.879980] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:22:13.929 [2024-11-26 20:50:08.879996] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.929 20:50:08 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:22:13.929 [2024-11-26 20:50:08.895765] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:22:14.187 [2024-11-26 20:50:08.929718] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:14.187 [2024-11-26 20:50:08.930416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:22:14.187 [2024-11-26 20:50:08.938684] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:14.187 [2024-11-26 20:50:08.938994] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:22:14.187 [2024-11-26 20:50:08.939008] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:22:14.187 20:50:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.187 20:50:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:22:14.446 [2024-11-26 20:50:09.223743] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:14.446 [2024-11-26 20:50:09.231633] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:14.446 [2024-11-26 20:50:09.231685] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:14.446 20:50:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:22:14.446 20:50:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:14.446 20:50:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:14.446 20:50:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.446 20:50:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:15.050 20:50:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.050 20:50:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:15.050 20:50:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:15.050 20:50:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.050 20:50:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:15.616 20:50:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.616 20:50:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:15.616 20:50:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:15.616 20:50:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.616 20:50:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:15.875 20:50:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.875 20:50:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:15.875 20:50:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:22:15.875 20:50:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.875 20:50:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.134 20:50:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.134 20:50:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:22:16.134 20:50:11 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:16.134 20:50:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.134 20:50:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.134 20:50:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.134 20:50:11 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:16.134 20:50:11 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:22:16.393 20:50:11 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:16.393 20:50:11 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:16.393 20:50:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.393 20:50:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.393 20:50:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.393 20:50:11 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:16.393 20:50:11 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:22:16.393 20:50:11 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:16.393 00:22:16.393 real 0m4.748s 00:22:16.393 user 0m1.112s 00:22:16.393 sys 0m0.237s 00:22:16.393 20:50:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.393 20:50:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.393 ************************************ 00:22:16.393 END TEST test_create_multi_ublk 00:22:16.393 ************************************ 00:22:16.393 20:50:11 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:16.393 20:50:11 ublk -- ublk/ublk.sh@147 -- # cleanup 00:22:16.393 20:50:11 ublk -- ublk/ublk.sh@130 -- # killprocess 76046 00:22:16.393 20:50:11 ublk -- common/autotest_common.sh@954 -- # '[' -z 76046 ']' 00:22:16.393 20:50:11 ublk -- common/autotest_common.sh@958 -- # kill -0 76046 00:22:16.393 20:50:11 ublk -- common/autotest_common.sh@959 -- # uname 00:22:16.393 20:50:11 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.393 20:50:11 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76046 00:22:16.393 20:50:11 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:16.393 killing process with pid 76046 00:22:16.393 20:50:11 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:16.393 20:50:11 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76046' 00:22:16.393 20:50:11 ublk -- common/autotest_common.sh@973 -- # kill 76046 00:22:16.393 20:50:11 ublk -- common/autotest_common.sh@978 -- # wait 76046 00:22:17.778 [2024-11-26 20:50:12.508648] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:17.778 [2024-11-26 20:50:12.508709] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:19.151 ************************************ 00:22:19.151 END TEST ublk 00:22:19.151 ************************************ 00:22:19.151 00:22:19.151 real 0m31.732s 00:22:19.151 user 0m45.389s 00:22:19.151 sys 0m10.664s 00:22:19.151 20:50:13 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.151 20:50:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:19.151 20:50:13 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:19.151 
20:50:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:19.151 20:50:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.151 20:50:13 -- common/autotest_common.sh@10 -- # set +x 00:22:19.151 ************************************ 00:22:19.151 START TEST ublk_recovery 00:22:19.151 ************************************ 00:22:19.151 20:50:13 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:19.151 * Looking for test storage... 00:22:19.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:19.151 20:50:13 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:19.151 20:50:13 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:19.151 20:50:13 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:19.151 20:50:14 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:19.151 20:50:14 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.151 20:50:14 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.151 20:50:14 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.151 20:50:14 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.151 20:50:14 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.152 20:50:14 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:22:19.152 20:50:14 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.152 20:50:14 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:19.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.152 --rc genhtml_branch_coverage=1 00:22:19.152 --rc genhtml_function_coverage=1 00:22:19.152 --rc genhtml_legend=1 00:22:19.152 --rc geninfo_all_blocks=1 00:22:19.152 --rc geninfo_unexecuted_blocks=1 00:22:19.152 00:22:19.152 ' 00:22:19.152 20:50:14 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:19.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.152 --rc genhtml_branch_coverage=1 00:22:19.152 --rc genhtml_function_coverage=1 00:22:19.152 --rc genhtml_legend=1 00:22:19.152 --rc geninfo_all_blocks=1 00:22:19.152 --rc geninfo_unexecuted_blocks=1 00:22:19.152 00:22:19.152 ' 00:22:19.152 20:50:14 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:19.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.152 --rc genhtml_branch_coverage=1 00:22:19.152 --rc genhtml_function_coverage=1 00:22:19.152 --rc genhtml_legend=1 00:22:19.152 --rc geninfo_all_blocks=1 00:22:19.152 --rc geninfo_unexecuted_blocks=1 00:22:19.152 00:22:19.152 ' 00:22:19.152 20:50:14 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:19.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.152 --rc genhtml_branch_coverage=1 00:22:19.152 --rc genhtml_function_coverage=1 00:22:19.152 --rc genhtml_legend=1 00:22:19.152 --rc geninfo_all_blocks=1 00:22:19.152 --rc geninfo_unexecuted_blocks=1 00:22:19.152 00:22:19.152 ' 00:22:19.152 20:50:14 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:19.152 20:50:14 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:19.152 20:50:14 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:19.152 20:50:14 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:19.152 20:50:14 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:19.152 20:50:14 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:19.152 20:50:14 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:19.152 20:50:14 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:19.152 20:50:14 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:22:19.152 20:50:14 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:22:19.152 20:50:14 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76480 00:22:19.152 20:50:14 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:19.152 20:50:14 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:19.152 20:50:14 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76480 00:22:19.152 20:50:14 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76480 ']' 00:22:19.152 20:50:14 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.152 20:50:14 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.152 20:50:14 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.152 20:50:14 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.152 20:50:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.410 [2024-11-26 20:50:14.147438] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:22:19.410 [2024-11-26 20:50:14.147602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76480 ] 00:22:19.410 [2024-11-26 20:50:14.322905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:19.669 [2024-11-26 20:50:14.442371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.669 [2024-11-26 20:50:14.442406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.634 20:50:15 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.634 20:50:15 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:20.634 20:50:15 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:22:20.634 20:50:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.634 20:50:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.634 [2024-11-26 20:50:15.362664] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:20.635 [2024-11-26 20:50:15.365548] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:20.635 20:50:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.635 20:50:15 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:20.635 20:50:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.635 20:50:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.635 malloc0 00:22:20.635 20:50:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.635 20:50:15 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:22:20.635 20:50:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.635 20:50:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.635 [2024-11-26 20:50:15.520809] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:22:20.635 [2024-11-26 20:50:15.520934] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:22:20.635 [2024-11-26 20:50:15.520949] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:20.635 [2024-11-26 20:50:15.520961] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:20.635 [2024-11-26 20:50:15.529752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:20.635 [2024-11-26 20:50:15.529777] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:20.635 [2024-11-26 20:50:15.536675] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:20.635 [2024-11-26 20:50:15.536816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:20.635 [2024-11-26 20:50:15.551650] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:20.635 1 00:22:20.635 20:50:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.635 20:50:15 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:22:22.011 20:50:16 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76516 00:22:22.011 20:50:16 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:22:22.011 20:50:16 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:22:22.011 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:22.011 fio-3.35 00:22:22.011 Starting 1 process 00:22:27.276 20:50:21 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76480 00:22:27.276 20:50:21 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:22:32.543 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76480 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:22:32.543 20:50:26 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76622 00:22:32.543 20:50:26 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.543 20:50:26 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:32.543 20:50:26 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76622 00:22:32.543 20:50:26 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76622 ']' 00:22:32.543 20:50:26 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.543 20:50:26 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.543 20:50:26 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.543 20:50:26 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.543 20:50:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.543 [2024-11-26 20:50:26.731413] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
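(For reference: the recovery flow exercised by the trace above reduces to a short RPC sequence. A minimal sketch, assuming ublk_drv is loaded and using rpc.py to stand in for the log's rpc_cmd wrapper; device and queue numbers mirror the log but are illustrative.)

  # first target: create the ublk target, back it with a 64 MiB malloc bdev,
  # and expose it to the kernel as /dev/ublkb1 (2 queues, queue depth 128)
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
  # fio runs randrw against /dev/ublkb1 while the target is SIGKILLed mid-I/O;
  # a fresh spdk_tgt is then started and the surviving kernel device reattached:
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_recover_disk malloc0 1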
00:22:32.543 [2024-11-26 20:50:26.731563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76622 ] 00:22:32.543 [2024-11-26 20:50:26.909054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:32.543 [2024-11-26 20:50:27.058243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.543 [2024-11-26 20:50:27.058251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.478 20:50:28 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.478 20:50:28 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:33.478 20:50:28 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:22:33.478 20:50:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.478 20:50:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.478 [2024-11-26 20:50:28.128639] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:33.478 [2024-11-26 20:50:28.132073] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:33.478 20:50:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.478 20:50:28 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:33.478 20:50:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.478 20:50:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.478 malloc0 00:22:33.478 20:50:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.478 20:50:28 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:22:33.478 20:50:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.478 20:50:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.478 [2024-11-26 20:50:28.318856] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:22:33.478 [2024-11-26 20:50:28.318912] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:33.478 [2024-11-26 20:50:28.318925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:33.478 [2024-11-26 20:50:28.326695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:33.478 [2024-11-26 20:50:28.326721] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:22:33.478 [2024-11-26 20:50:28.326732] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:22:33.478 [2024-11-26 20:50:28.326845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:22:33.478 1 00:22:33.478 20:50:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.478 20:50:28 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76516 00:22:33.478 [2024-11-26 20:50:28.334649] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:22:33.478 [2024-11-26 20:50:28.339068] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:22:33.478 [2024-11-26 20:50:28.348866] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:22:33.478 [2024-11-26 
20:50:28.348891] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:23:29.729 00:23:29.729 fio_test: (groupid=0, jobs=1): err= 0: pid=76519: Tue Nov 26 20:51:16 2024 00:23:29.729 read: IOPS=21.6k, BW=84.4MiB/s (88.5MB/s)(5062MiB/60002msec) 00:23:29.729 slat (nsec): min=2000, max=1101.3k, avg=6159.45, stdev=3297.10 00:23:29.729 clat (usec): min=694, max=6789.8k, avg=2868.66, stdev=44610.52 00:23:29.729 lat (usec): min=699, max=6789.8k, avg=2874.82, stdev=44610.52 00:23:29.729 clat percentiles (usec): 00:23:29.729 | 1.00th=[ 2024], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2311], 00:23:29.729 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:23:29.729 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 3097], 95.00th=[ 3687], 00:23:29.729 | 99.00th=[ 5145], 99.50th=[ 6390], 99.90th=[ 7832], 99.95th=[ 8848], 00:23:29.729 | 99.99th=[13698] 00:23:29.729 bw ( KiB/s): min=35656, max=103928, per=100.00%, avg=97137.43, stdev=11160.00, samples=106 00:23:29.729 iops : min= 8914, max=25982, avg=24284.38, stdev=2790.01, samples=106 00:23:29.729 write: IOPS=21.6k, BW=84.3MiB/s (88.4MB/s)(5056MiB/60002msec); 0 zone resets 00:23:29.729 slat (usec): min=2, max=924, avg= 6.21, stdev= 3.13 00:23:29.729 clat (usec): min=739, max=6790.0k, avg=3048.11, stdev=50610.31 00:23:29.729 lat (usec): min=744, max=6790.0k, avg=3054.32, stdev=50610.30 00:23:29.729 clat percentiles (usec): 00:23:29.729 | 1.00th=[ 1991], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2409], 00:23:29.729 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:23:29.729 | 70.00th=[ 2606], 80.00th=[ 2704], 90.00th=[ 3228], 95.00th=[ 3621], 00:23:29.729 | 99.00th=[ 5145], 99.50th=[ 6587], 99.90th=[ 7963], 99.95th=[ 8979], 00:23:29.729 | 99.99th=[13829] 00:23:29.729 bw ( KiB/s): min=36072, max=103536, per=100.00%, avg=97047.70, stdev=11003.26, samples=106 00:23:29.729 iops : min= 9018, max=25884, avg=24261.92, stdev=2750.82, samples=106 00:23:29.729 lat (usec) : 750=0.01%, 1000=0.01% 00:23:29.729 lat (msec) : 2=0.94%, 4=95.18%, 10=3.84%, 20=0.03%, >=2000=0.01% 00:23:29.729 cpu : usr=9.57%, sys=26.72%, ctx=90183, majf=0, minf=13 00:23:29.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:23:29.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:29.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:29.729 issued rwts: total=1295849,1294334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:29.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:29.729 00:23:29.729 Run status group 0 (all jobs): 00:23:29.729 READ: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=5062MiB (5308MB), run=60002-60002msec 00:23:29.729 WRITE: bw=84.3MiB/s (88.4MB/s), 84.3MiB/s-84.3MiB/s (88.4MB/s-88.4MB/s), io=5056MiB (5302MB), run=60002-60002msec 00:23:29.729 00:23:29.729 Disk stats (read/write): 00:23:29.729 ublkb1: ios=1293593/1292167, merge=0/0, ticks=3601709/3697505, in_queue=7299214, util=99.90% 00:23:29.729 20:51:16 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.729 [2024-11-26 20:51:16.839962] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:29.729 [2024-11-26 20:51:16.883688] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:23:29.729 [2024-11-26 20:51:16.884046] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:29.729 [2024-11-26 20:51:16.894710] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:29.729 [2024-11-26 20:51:16.894885] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:29.729 [2024-11-26 20:51:16.894904] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.729 20:51:16 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.729 [2024-11-26 20:51:16.909830] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:29.729 [2024-11-26 20:51:16.918302] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:29.729 [2024-11-26 20:51:16.918358] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.729 20:51:16 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:23:29.729 20:51:16 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:23:29.729 20:51:16 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76622 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76622 ']' 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76622 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76622 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:29.729 killing process with pid 76622 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76622' 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76622 00:23:29.729 20:51:16 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76622 00:23:29.729 [2024-11-26 20:51:18.759776] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:29.729 [2024-11-26 20:51:18.759835] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:29.729 00:23:29.729 real 1m6.441s 00:23:29.729 user 1m48.012s 00:23:29.729 sys 0m35.716s 00:23:29.729 20:51:20 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.729 20:51:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.729 ************************************ 00:23:29.729 END TEST ublk_recovery 00:23:29.729 ************************************ 00:23:29.729 20:51:20 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:23:29.729 20:51:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:29.729 20:51:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:29.729 20:51:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.729 20:51:20 -- common/autotest_common.sh@10 -- # set +x 00:23:29.729 20:51:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:29.729 20:51:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:29.729 20:51:20 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:23:29.729 20:51:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:29.729 20:51:20 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:29.729 20:51:20 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:29.729 20:51:20 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:29.729 20:51:20 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:29.729 20:51:20 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:29.729 20:51:20 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:23:29.729 20:51:20 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:29.729 20:51:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:29.729 20:51:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.729 20:51:20 -- common/autotest_common.sh@10 -- # set +x 00:23:29.729 ************************************ 00:23:29.729 START TEST ftl 00:23:29.729 ************************************ 00:23:29.729 20:51:20 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:29.729 * Looking for test storage... 00:23:29.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:29.729 20:51:20 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:29.729 20:51:20 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:29.729 20:51:20 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:23:29.729 20:51:20 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:29.729 20:51:20 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.729 20:51:20 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.729 20:51:20 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.729 20:51:20 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.729 20:51:20 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.729 20:51:20 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.729 20:51:20 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.729 20:51:20 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.729 20:51:20 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.729 20:51:20 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.729 20:51:20 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.729 20:51:20 ftl -- scripts/common.sh@344 -- # case "$op" in 00:23:29.729 20:51:20 ftl -- scripts/common.sh@345 -- # : 1 00:23:29.729 20:51:20 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.729 20:51:20 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.729 20:51:20 ftl -- scripts/common.sh@365 -- # decimal 1 00:23:29.729 20:51:20 ftl -- scripts/common.sh@353 -- # local d=1 00:23:29.730 20:51:20 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.730 20:51:20 ftl -- scripts/common.sh@355 -- # echo 1 00:23:29.730 20:51:20 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.730 20:51:20 ftl -- scripts/common.sh@366 -- # decimal 2 00:23:29.730 20:51:20 ftl -- scripts/common.sh@353 -- # local d=2 00:23:29.730 20:51:20 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.730 20:51:20 ftl -- scripts/common.sh@355 -- # echo 2 00:23:29.730 20:51:20 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.730 20:51:20 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.730 20:51:20 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.730 20:51:20 ftl -- scripts/common.sh@368 -- # return 0 00:23:29.730 20:51:20 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.730 20:51:20 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.730 --rc genhtml_branch_coverage=1 00:23:29.730 --rc genhtml_function_coverage=1 00:23:29.730 --rc genhtml_legend=1 00:23:29.730 --rc geninfo_all_blocks=1 00:23:29.730 --rc geninfo_unexecuted_blocks=1 00:23:29.730 00:23:29.730 ' 00:23:29.730 20:51:20 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.730 --rc genhtml_branch_coverage=1 00:23:29.730 --rc genhtml_function_coverage=1 00:23:29.730 --rc genhtml_legend=1 00:23:29.730 --rc geninfo_all_blocks=1 00:23:29.730 --rc geninfo_unexecuted_blocks=1 00:23:29.730 00:23:29.730 ' 00:23:29.730 20:51:20 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.730 --rc genhtml_branch_coverage=1 00:23:29.730 --rc genhtml_function_coverage=1 00:23:29.730 --rc genhtml_legend=1 00:23:29.730 --rc geninfo_all_blocks=1 00:23:29.730 --rc geninfo_unexecuted_blocks=1 00:23:29.730 00:23:29.730 ' 00:23:29.730 20:51:20 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:29.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.730 --rc genhtml_branch_coverage=1 00:23:29.730 --rc genhtml_function_coverage=1 00:23:29.730 --rc genhtml_legend=1 00:23:29.730 --rc geninfo_all_blocks=1 00:23:29.730 --rc geninfo_unexecuted_blocks=1 00:23:29.730 00:23:29.730 ' 00:23:29.730 20:51:20 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:29.730 20:51:20 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:29.730 20:51:20 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:29.730 20:51:20 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:29.730 20:51:20 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
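(The lt/cmp_versions trace above is a plain component-wise version compare: split both version strings on '.', '-' and ':', then walk the longer of the two arrays comparing numerically. A minimal re-implementation of the visible logic, not the actual scripts/common.sh source:)

  ver_lt() {                      # returns 0 if version $1 < version $2
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing components count as 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1                    # equal is not less-than
  }
  ver_lt 1.15 2 && echo 'lcov is older than 2'   # matches the 'lt 1.15 2' call traced above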
00:23:29.730 20:51:20 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:29.730 20:51:20 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:29.730 20:51:20 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:29.730 20:51:20 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:29.730 20:51:20 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.730 20:51:20 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.730 20:51:20 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:29.730 20:51:20 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:29.730 20:51:20 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:29.730 20:51:20 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:29.730 20:51:20 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:29.730 20:51:20 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:29.730 20:51:20 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.730 20:51:20 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.730 20:51:20 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:29.730 20:51:20 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:29.730 20:51:20 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:29.730 20:51:20 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:29.730 20:51:20 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:29.730 20:51:20 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:29.730 20:51:20 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:29.730 20:51:20 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:29.730 20:51:20 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:29.730 20:51:20 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:29.730 20:51:20 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:29.730 20:51:20 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:23:29.730 20:51:20 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:23:29.730 20:51:20 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:23:29.730 20:51:20 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:23:29.730 20:51:20 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:29.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:29.730 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.730 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.730 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.730 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.730 20:51:21 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77428 00:23:29.730 20:51:21 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:23:29.730 20:51:21 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77428 00:23:29.730 20:51:21 ftl -- common/autotest_common.sh@835 -- # '[' -z 77428 ']' 00:23:29.730 20:51:21 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.730 20:51:21 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.730 20:51:21 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.730 20:51:21 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.730 20:51:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:29.730 [2024-11-26 20:51:21.397375] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:23:29.730 [2024-11-26 20:51:21.398153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77428 ] 00:23:29.730 [2024-11-26 20:51:21.602741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.730 [2024-11-26 20:51:21.765768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.730 20:51:22 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.730 20:51:22 ftl -- common/autotest_common.sh@868 -- # return 0 00:23:29.730 20:51:22 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:23:29.730 20:51:22 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:29.730 20:51:23 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:23:29.730 20:51:23 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@50 -- # break 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@63 -- # break 00:23:29.730 20:51:24 ftl -- ftl/ftl.sh@66 -- # killprocess 77428 00:23:29.730 20:51:24 ftl -- common/autotest_common.sh@954 -- # '[' -z 77428 ']' 00:23:29.730 20:51:24 ftl -- common/autotest_common.sh@958 -- # kill -0 77428 00:23:29.730 20:51:24 ftl -- common/autotest_common.sh@959 -- # uname 00:23:29.730 20:51:24 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.730 20:51:24 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77428 00:23:29.730 20:51:24 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:29.730 20:51:24 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:29.730 killing process with pid 77428 00:23:29.730 20:51:24 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77428' 00:23:29.730 20:51:24 ftl -- common/autotest_common.sh@973 -- # kill 77428 00:23:29.730 20:51:24 ftl -- common/autotest_common.sh@978 -- # wait 77428 00:23:32.262 20:51:27 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:23:32.262 20:51:27 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:32.262 20:51:27 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:32.262 20:51:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.262 20:51:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:32.262 ************************************ 00:23:32.262 START TEST ftl_fio_basic 00:23:32.262 ************************************ 00:23:32.262 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:32.262 * Looking for test storage... 00:23:32.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:32.262 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:32.262 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:23:32.262 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:23:32.521 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:32.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.522 --rc genhtml_branch_coverage=1 00:23:32.522 --rc genhtml_function_coverage=1 00:23:32.522 --rc genhtml_legend=1 00:23:32.522 --rc geninfo_all_blocks=1 00:23:32.522 --rc geninfo_unexecuted_blocks=1 00:23:32.522 00:23:32.522 ' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:32.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.522 --rc genhtml_branch_coverage=1 00:23:32.522 --rc genhtml_function_coverage=1 00:23:32.522 --rc genhtml_legend=1 00:23:32.522 --rc geninfo_all_blocks=1 00:23:32.522 --rc geninfo_unexecuted_blocks=1 00:23:32.522 00:23:32.522 ' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:32.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.522 --rc genhtml_branch_coverage=1 00:23:32.522 --rc genhtml_function_coverage=1 00:23:32.522 --rc genhtml_legend=1 00:23:32.522 --rc geninfo_all_blocks=1 00:23:32.522 --rc geninfo_unexecuted_blocks=1 00:23:32.522 00:23:32.522 ' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:32.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.522 --rc genhtml_branch_coverage=1 00:23:32.522 --rc genhtml_function_coverage=1 00:23:32.522 --rc genhtml_legend=1 00:23:32.522 --rc geninfo_all_blocks=1 00:23:32.522 --rc geninfo_unexecuted_blocks=1 00:23:32.522 00:23:32.522 ' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
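(Both test suites locate the tree the same way when sourcing ftl/common.sh, as the dirname/readlink pairs in the trace show: canonicalize the test script's directory, then walk two levels up to the repo root. The pattern, roughly:)

  testdir=$(readlink -f "$(dirname "$0")")    # .../spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")     # .../spdk
  rpc_py=$rootdir/scripts/rpc.py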
00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77580 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77580 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77580 ']' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:32.522 20:51:27 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:23:32.522 [2024-11-26 20:51:27.477306] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
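(fio.sh keys its job lists off the associative array declared at the top of the trace, so the 'basic' argument expands to the three jobs selected above. Condensed, the selection logic looks like the following; the echo loop is illustrative only:)

  declare -A suite
  suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
  suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
  suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
  tests=${suite['basic']}              # fio.sh was invoked with 'basic'
  for t in $tests; do echo "fio job: $t"; done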
00:23:32.522 [2024-11-26 20:51:27.477483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77580 ] 00:23:32.781 [2024-11-26 20:51:27.669694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:33.039 [2024-11-26 20:51:27.784456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.039 [2024-11-26 20:51:27.784497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.039 [2024-11-26 20:51:27.784517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.977 20:51:28 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.977 20:51:28 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:23:33.977 20:51:28 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:33.977 20:51:28 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:23:33.977 20:51:28 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:33.977 20:51:28 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:23:33.977 20:51:28 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:23:33.977 20:51:28 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:34.236 20:51:28 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:34.236 20:51:28 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:23:34.236 20:51:28 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:34.236 20:51:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:34.236 20:51:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:34.236 20:51:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:34.236 20:51:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:34.236 20:51:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:34.494 { 00:23:34.494 "name": "nvme0n1", 00:23:34.494 "aliases": [ 00:23:34.494 "6985d882-12da-4d7b-b171-fbfbba826b7b" 00:23:34.494 ], 00:23:34.494 "product_name": "NVMe disk", 00:23:34.494 "block_size": 4096, 00:23:34.494 "num_blocks": 1310720, 00:23:34.494 "uuid": "6985d882-12da-4d7b-b171-fbfbba826b7b", 00:23:34.494 "numa_id": -1, 00:23:34.494 "assigned_rate_limits": { 00:23:34.494 "rw_ios_per_sec": 0, 00:23:34.494 "rw_mbytes_per_sec": 0, 00:23:34.494 "r_mbytes_per_sec": 0, 00:23:34.494 "w_mbytes_per_sec": 0 00:23:34.494 }, 00:23:34.494 "claimed": false, 00:23:34.494 "zoned": false, 00:23:34.494 "supported_io_types": { 00:23:34.494 "read": true, 00:23:34.494 "write": true, 00:23:34.494 "unmap": true, 00:23:34.494 "flush": true, 00:23:34.494 "reset": true, 00:23:34.494 "nvme_admin": true, 00:23:34.494 "nvme_io": true, 00:23:34.494 "nvme_io_md": false, 00:23:34.494 "write_zeroes": true, 00:23:34.494 "zcopy": false, 00:23:34.494 "get_zone_info": false, 00:23:34.494 "zone_management": false, 00:23:34.494 "zone_append": false, 00:23:34.494 "compare": true, 00:23:34.494 "compare_and_write": false, 00:23:34.494 "abort": true, 00:23:34.494 
"seek_hole": false, 00:23:34.494 "seek_data": false, 00:23:34.494 "copy": true, 00:23:34.494 "nvme_iov_md": false 00:23:34.494 }, 00:23:34.494 "driver_specific": { 00:23:34.494 "nvme": [ 00:23:34.494 { 00:23:34.494 "pci_address": "0000:00:11.0", 00:23:34.494 "trid": { 00:23:34.494 "trtype": "PCIe", 00:23:34.494 "traddr": "0000:00:11.0" 00:23:34.494 }, 00:23:34.494 "ctrlr_data": { 00:23:34.494 "cntlid": 0, 00:23:34.494 "vendor_id": "0x1b36", 00:23:34.494 "model_number": "QEMU NVMe Ctrl", 00:23:34.494 "serial_number": "12341", 00:23:34.494 "firmware_revision": "8.0.0", 00:23:34.494 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:34.494 "oacs": { 00:23:34.494 "security": 0, 00:23:34.494 "format": 1, 00:23:34.494 "firmware": 0, 00:23:34.494 "ns_manage": 1 00:23:34.494 }, 00:23:34.494 "multi_ctrlr": false, 00:23:34.494 "ana_reporting": false 00:23:34.494 }, 00:23:34.494 "vs": { 00:23:34.494 "nvme_version": "1.4" 00:23:34.494 }, 00:23:34.494 "ns_data": { 00:23:34.494 "id": 1, 00:23:34.494 "can_share": false 00:23:34.494 } 00:23:34.494 } 00:23:34.494 ], 00:23:34.494 "mp_policy": "active_passive" 00:23:34.494 } 00:23:34.494 } 00:23:34.494 ]' 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:34.494 20:51:29 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:34.753 20:51:29 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:23:34.753 20:51:29 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:35.010 20:51:29 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=5deccf31-a533-4b26-8eb9-0f3bf4c815a8 00:23:35.010 20:51:29 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5deccf31-a533-4b26-8eb9-0f3bf4c815a8 00:23:35.267 20:51:30 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 00:23:35.267 20:51:30 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 00:23:35.267 20:51:30 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:23:35.267 20:51:30 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:35.267 20:51:30 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 00:23:35.267 20:51:30 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:23:35.267 20:51:30 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 00:23:35.267 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 
00:23:35.267 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:35.267 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:35.267 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:35.267 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 00:23:35.526 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:35.526 { 00:23:35.526 "name": "fb2a7147-f19a-4fd9-a242-ec4055e8ca2e", 00:23:35.526 "aliases": [ 00:23:35.526 "lvs/nvme0n1p0" 00:23:35.526 ], 00:23:35.526 "product_name": "Logical Volume", 00:23:35.526 "block_size": 4096, 00:23:35.526 "num_blocks": 26476544, 00:23:35.526 "uuid": "fb2a7147-f19a-4fd9-a242-ec4055e8ca2e", 00:23:35.526 "assigned_rate_limits": { 00:23:35.526 "rw_ios_per_sec": 0, 00:23:35.526 "rw_mbytes_per_sec": 0, 00:23:35.526 "r_mbytes_per_sec": 0, 00:23:35.526 "w_mbytes_per_sec": 0 00:23:35.526 }, 00:23:35.526 "claimed": false, 00:23:35.526 "zoned": false, 00:23:35.526 "supported_io_types": { 00:23:35.526 "read": true, 00:23:35.526 "write": true, 00:23:35.526 "unmap": true, 00:23:35.526 "flush": false, 00:23:35.526 "reset": true, 00:23:35.526 "nvme_admin": false, 00:23:35.526 "nvme_io": false, 00:23:35.526 "nvme_io_md": false, 00:23:35.526 "write_zeroes": true, 00:23:35.526 "zcopy": false, 00:23:35.526 "get_zone_info": false, 00:23:35.526 "zone_management": false, 00:23:35.526 "zone_append": false, 00:23:35.526 "compare": false, 00:23:35.526 "compare_and_write": false, 00:23:35.526 "abort": false, 00:23:35.526 "seek_hole": true, 00:23:35.526 "seek_data": true, 00:23:35.526 "copy": false, 00:23:35.526 "nvme_iov_md": false 00:23:35.526 }, 00:23:35.526 "driver_specific": { 00:23:35.526 "lvol": { 00:23:35.526 "lvol_store_uuid": "5deccf31-a533-4b26-8eb9-0f3bf4c815a8", 00:23:35.526 "base_bdev": "nvme0n1", 00:23:35.526 "thin_provision": true, 00:23:35.526 "num_allocated_clusters": 0, 00:23:35.526 "snapshot": false, 00:23:35.526 "clone": false, 00:23:35.526 "esnap_clone": false 00:23:35.526 } 00:23:35.526 } 00:23:35.526 } 00:23:35.526 ]' 00:23:35.526 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:35.526 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:35.526 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:35.526 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:35.526 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:35.526 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:35.526 20:51:30 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:23:35.526 20:51:30 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:23:35.526 20:51:30 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:35.784 20:51:30 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:35.784 20:51:30 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:35.784 20:51:30 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 00:23:35.784 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 00:23:35.784 20:51:30 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:35.784 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:35.784 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:35.784 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 00:23:36.043 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:36.043 { 00:23:36.043 "name": "fb2a7147-f19a-4fd9-a242-ec4055e8ca2e", 00:23:36.043 "aliases": [ 00:23:36.043 "lvs/nvme0n1p0" 00:23:36.043 ], 00:23:36.043 "product_name": "Logical Volume", 00:23:36.043 "block_size": 4096, 00:23:36.043 "num_blocks": 26476544, 00:23:36.043 "uuid": "fb2a7147-f19a-4fd9-a242-ec4055e8ca2e", 00:23:36.043 "assigned_rate_limits": { 00:23:36.043 "rw_ios_per_sec": 0, 00:23:36.043 "rw_mbytes_per_sec": 0, 00:23:36.043 "r_mbytes_per_sec": 0, 00:23:36.043 "w_mbytes_per_sec": 0 00:23:36.043 }, 00:23:36.043 "claimed": false, 00:23:36.043 "zoned": false, 00:23:36.043 "supported_io_types": { 00:23:36.043 "read": true, 00:23:36.043 "write": true, 00:23:36.043 "unmap": true, 00:23:36.043 "flush": false, 00:23:36.043 "reset": true, 00:23:36.043 "nvme_admin": false, 00:23:36.043 "nvme_io": false, 00:23:36.043 "nvme_io_md": false, 00:23:36.043 "write_zeroes": true, 00:23:36.043 "zcopy": false, 00:23:36.043 "get_zone_info": false, 00:23:36.043 "zone_management": false, 00:23:36.043 "zone_append": false, 00:23:36.043 "compare": false, 00:23:36.043 "compare_and_write": false, 00:23:36.043 "abort": false, 00:23:36.043 "seek_hole": true, 00:23:36.043 "seek_data": true, 00:23:36.043 "copy": false, 00:23:36.043 "nvme_iov_md": false 00:23:36.043 }, 00:23:36.043 "driver_specific": { 00:23:36.043 "lvol": { 00:23:36.043 "lvol_store_uuid": "5deccf31-a533-4b26-8eb9-0f3bf4c815a8", 00:23:36.043 "base_bdev": "nvme0n1", 00:23:36.043 "thin_provision": true, 00:23:36.043 "num_allocated_clusters": 0, 00:23:36.043 "snapshot": false, 00:23:36.043 "clone": false, 00:23:36.043 "esnap_clone": false 00:23:36.043 } 00:23:36.043 } 00:23:36.043 } 00:23:36.043 ]' 00:23:36.043 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:36.043 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:36.043 20:51:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:36.043 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:36.043 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:36.043 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:36.043 20:51:31 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:23:36.043 20:51:31 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:36.302 20:51:31 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:23:36.302 20:51:31 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:23:36.302 20:51:31 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:23:36.302 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:23:36.302 20:51:31 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 00:23:36.302 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 00:23:36.302 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:36.302 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:36.302 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:36.302 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb2a7147-f19a-4fd9-a242-ec4055e8ca2e 00:23:36.560 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:36.560 { 00:23:36.560 "name": "fb2a7147-f19a-4fd9-a242-ec4055e8ca2e", 00:23:36.560 "aliases": [ 00:23:36.560 "lvs/nvme0n1p0" 00:23:36.560 ], 00:23:36.560 "product_name": "Logical Volume", 00:23:36.560 "block_size": 4096, 00:23:36.560 "num_blocks": 26476544, 00:23:36.560 "uuid": "fb2a7147-f19a-4fd9-a242-ec4055e8ca2e", 00:23:36.560 "assigned_rate_limits": { 00:23:36.560 "rw_ios_per_sec": 0, 00:23:36.560 "rw_mbytes_per_sec": 0, 00:23:36.560 "r_mbytes_per_sec": 0, 00:23:36.560 "w_mbytes_per_sec": 0 00:23:36.560 }, 00:23:36.560 "claimed": false, 00:23:36.560 "zoned": false, 00:23:36.560 "supported_io_types": { 00:23:36.560 "read": true, 00:23:36.560 "write": true, 00:23:36.560 "unmap": true, 00:23:36.560 "flush": false, 00:23:36.560 "reset": true, 00:23:36.560 "nvme_admin": false, 00:23:36.560 "nvme_io": false, 00:23:36.560 "nvme_io_md": false, 00:23:36.560 "write_zeroes": true, 00:23:36.560 "zcopy": false, 00:23:36.560 "get_zone_info": false, 00:23:36.560 "zone_management": false, 00:23:36.560 "zone_append": false, 00:23:36.560 "compare": false, 00:23:36.560 "compare_and_write": false, 00:23:36.560 "abort": false, 00:23:36.560 "seek_hole": true, 00:23:36.560 "seek_data": true, 00:23:36.560 "copy": false, 00:23:36.560 "nvme_iov_md": false 00:23:36.560 }, 00:23:36.560 "driver_specific": { 00:23:36.560 "lvol": { 00:23:36.560 "lvol_store_uuid": "5deccf31-a533-4b26-8eb9-0f3bf4c815a8", 00:23:36.560 "base_bdev": "nvme0n1", 00:23:36.560 "thin_provision": true, 00:23:36.560 "num_allocated_clusters": 0, 00:23:36.560 "snapshot": false, 00:23:36.560 "clone": false, 00:23:36.560 "esnap_clone": false 00:23:36.560 } 00:23:36.560 } 00:23:36.560 } 00:23:36.560 ]' 00:23:36.560 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:36.560 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:36.560 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:36.560 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:36.560 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:36.560 20:51:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:36.560 20:51:31 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:36.560 20:51:31 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:36.560 20:51:31 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fb2a7147-f19a-4fd9-a242-ec4055e8ca2e -c nvc0n1p0 --l2p_dram_limit 60 00:23:36.819 [2024-11-26 20:51:31.693391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.819 [2024-11-26 20:51:31.693449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.819 [2024-11-26 20:51:31.693472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:36.819 
[2024-11-26 20:51:31.693484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.819 [2024-11-26 20:51:31.693586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.819 [2024-11-26 20:51:31.693601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.819 [2024-11-26 20:51:31.693628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:36.819 [2024-11-26 20:51:31.693641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.819 [2024-11-26 20:51:31.693678] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.819 [2024-11-26 20:51:31.694911] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.819 [2024-11-26 20:51:31.694950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.819 [2024-11-26 20:51:31.694964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.819 [2024-11-26 20:51:31.694980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:23:36.819 [2024-11-26 20:51:31.694992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.819 [2024-11-26 20:51:31.695113] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 684d08a3-49ee-4983-9a65-32aa692be41c 00:23:36.819 [2024-11-26 20:51:31.696748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.819 [2024-11-26 20:51:31.696795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:36.819 [2024-11-26 20:51:31.696810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:36.819 [2024-11-26 20:51:31.696825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.819 [2024-11-26 20:51:31.704662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.819 [2024-11-26 20:51:31.704704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.819 [2024-11-26 20:51:31.704718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.748 ms 00:23:36.819 [2024-11-26 20:51:31.704740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.819 [2024-11-26 20:51:31.704872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.819 [2024-11-26 20:51:31.704892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.819 [2024-11-26 20:51:31.704905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:36.820 [2024-11-26 20:51:31.704924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.820 [2024-11-26 20:51:31.704994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.820 [2024-11-26 20:51:31.705010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.820 [2024-11-26 20:51:31.705022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:36.820 [2024-11-26 20:51:31.705035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.820 [2024-11-26 20:51:31.705097] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.820 [2024-11-26 20:51:31.710895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.820 [2024-11-26 
20:51:31.710929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.820 [2024-11-26 20:51:31.710949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.827 ms 00:23:36.820 [2024-11-26 20:51:31.710959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.820 [2024-11-26 20:51:31.711008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.820 [2024-11-26 20:51:31.711019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.820 [2024-11-26 20:51:31.711032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:36.820 [2024-11-26 20:51:31.711042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.820 [2024-11-26 20:51:31.711111] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:36.820 [2024-11-26 20:51:31.711271] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.820 [2024-11-26 20:51:31.711296] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.820 [2024-11-26 20:51:31.711312] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:36.820 [2024-11-26 20:51:31.711329] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.820 [2024-11-26 20:51:31.711343] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.820 [2024-11-26 20:51:31.711358] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:36.820 [2024-11-26 20:51:31.711369] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.820 [2024-11-26 20:51:31.711384] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.820 [2024-11-26 20:51:31.711395] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.820 [2024-11-26 20:51:31.711412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.820 [2024-11-26 20:51:31.711423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.820 [2024-11-26 20:51:31.711440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:23:36.820 [2024-11-26 20:51:31.711452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.820 [2024-11-26 20:51:31.711553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.820 [2024-11-26 20:51:31.711571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.820 [2024-11-26 20:51:31.711586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:36.820 [2024-11-26 20:51:31.711597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.820 [2024-11-26 20:51:31.711739] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.820 [2024-11-26 20:51:31.711756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.820 [2024-11-26 20:51:31.711770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.820 [2024-11-26 20:51:31.711782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.820 [2024-11-26 20:51:31.711797] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:23:36.820 [2024-11-26 20:51:31.711807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.820 [2024-11-26 20:51:31.711820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:36.820 [2024-11-26 20:51:31.711831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.820 [2024-11-26 20:51:31.711844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:36.820 [2024-11-26 20:51:31.711855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.820 [2024-11-26 20:51:31.711868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.820 [2024-11-26 20:51:31.711878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:36.820 [2024-11-26 20:51:31.711891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.820 [2024-11-26 20:51:31.711901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.820 [2024-11-26 20:51:31.711918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:36.820 [2024-11-26 20:51:31.711929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.820 [2024-11-26 20:51:31.711946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.820 [2024-11-26 20:51:31.711957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:36.820 [2024-11-26 20:51:31.711970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.820 [2024-11-26 20:51:31.711980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.820 [2024-11-26 20:51:31.711994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:36.820 [2024-11-26 20:51:31.712004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.820 [2024-11-26 20:51:31.712017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.820 [2024-11-26 20:51:31.712027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:36.820 [2024-11-26 20:51:31.712040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.820 [2024-11-26 20:51:31.712050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.820 [2024-11-26 20:51:31.712062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:36.820 [2024-11-26 20:51:31.712072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.820 [2024-11-26 20:51:31.712085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.820 [2024-11-26 20:51:31.712095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:36.820 [2024-11-26 20:51:31.712107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.820 [2024-11-26 20:51:31.712118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.820 [2024-11-26 20:51:31.712133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:36.820 [2024-11-26 20:51:31.712159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.820 [2024-11-26 20:51:31.712173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.820 [2024-11-26 20:51:31.712184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:36.820 [2024-11-26 20:51:31.712196] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.820 [2024-11-26 20:51:31.712207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.820 [2024-11-26 20:51:31.712219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:36.820 [2024-11-26 20:51:31.712230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.820 [2024-11-26 20:51:31.712242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.820 [2024-11-26 20:51:31.712253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:36.820 [2024-11-26 20:51:31.712267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.820 [2024-11-26 20:51:31.712277] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.820 [2024-11-26 20:51:31.712291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.820 [2024-11-26 20:51:31.712302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.820 [2024-11-26 20:51:31.712317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.820 [2024-11-26 20:51:31.712329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:36.820 [2024-11-26 20:51:31.712345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.820 [2024-11-26 20:51:31.712355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.820 [2024-11-26 20:51:31.712368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.820 [2024-11-26 20:51:31.712378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.820 [2024-11-26 20:51:31.712391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.820 [2024-11-26 20:51:31.712406] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.820 [2024-11-26 20:51:31.712423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.820 [2024-11-26 20:51:31.712436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:36.820 [2024-11-26 20:51:31.712450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:36.820 [2024-11-26 20:51:31.712462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:36.820 [2024-11-26 20:51:31.712476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:36.820 [2024-11-26 20:51:31.712488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:36.820 [2024-11-26 20:51:31.712502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:36.820 [2024-11-26 20:51:31.712513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:36.820 [2024-11-26 20:51:31.712527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:23:36.820 [2024-11-26 20:51:31.712538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:36.820 [2024-11-26 20:51:31.712554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:36.820 [2024-11-26 20:51:31.712565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:36.820 [2024-11-26 20:51:31.712581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:36.820 [2024-11-26 20:51:31.712592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:36.820 [2024-11-26 20:51:31.712607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:36.821 [2024-11-26 20:51:31.712629] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.821 [2024-11-26 20:51:31.712649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.821 [2024-11-26 20:51:31.712661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.821 [2024-11-26 20:51:31.712676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.821 [2024-11-26 20:51:31.712688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.821 [2024-11-26 20:51:31.712702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.821 [2024-11-26 20:51:31.712714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.821 [2024-11-26 20:51:31.712731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.821 [2024-11-26 20:51:31.712742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:23:36.821 [2024-11-26 20:51:31.712758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.821 [2024-11-26 20:51:31.712848] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
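[editor's annotation, not part of the captured trace] The layout dump above is self-consistent with those create parameters; a quick shell cross-check, assuming the reported 4 KiB FTL block and 4-byte L2P entry:

  echo $(( 20971520 * 4096 / 1024 / 1024 ))  # 20971520 L2P entries * 4 KiB = 81920 MiB (80 GiB) of user space
  echo $(( 20971520 * 4 / 1024 / 1024 ))     # full L2P table: 80 MiB, matching "Region l2p ... blocks: 80.00 MiB"
  echo $(( 0x5000 * 4096 / 1024 / 1024 ))    # region type:0x2 blk_sz:0x5000 is that same 80 MiB

Only 60 MiB of that table may stay resident (--l2p_dram_limit 60); the startup below reports "l2p maximum resident size is: 59 (of 60) MiB". The scrub announced above wipes the 5 NV-cache chunks and, at roughly 3023 ms, accounts for most of the 3588.740 ms total startup reported further down.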
00:23:36.821 [2024-11-26 20:51:31.712867] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:40.104 [2024-11-26 20:51:34.735874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:34.735948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:40.104 [2024-11-26 20:51:34.735966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3023.013 ms 00:23:40.104 [2024-11-26 20:51:34.735980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:34.776415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:34.776478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.104 [2024-11-26 20:51:34.776497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.144 ms 00:23:40.104 [2024-11-26 20:51:34.776512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:34.776728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:34.776756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.104 [2024-11-26 20:51:34.776770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:40.104 [2024-11-26 20:51:34.776788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:34.839561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:34.839636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.104 [2024-11-26 20:51:34.839675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.704 ms 00:23:40.104 [2024-11-26 20:51:34.839690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:34.839754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:34.839770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.104 [2024-11-26 20:51:34.839783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:40.104 [2024-11-26 20:51:34.839797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:34.840338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:34.840359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.104 [2024-11-26 20:51:34.840374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:23:40.104 [2024-11-26 20:51:34.840388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:34.840527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:34.840547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.104 [2024-11-26 20:51:34.840559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:23:40.104 [2024-11-26 20:51:34.840576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:34.862924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:34.862981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.104 [2024-11-26 
20:51:34.862997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.313 ms 00:23:40.104 [2024-11-26 20:51:34.863011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:34.876388] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:40.104 [2024-11-26 20:51:34.893385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:34.893449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.104 [2024-11-26 20:51:34.893474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.228 ms 00:23:40.104 [2024-11-26 20:51:34.893485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:34.965531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:34.965609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:40.104 [2024-11-26 20:51:34.965649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.975 ms 00:23:40.104 [2024-11-26 20:51:34.965660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:34.965903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:34.965919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.104 [2024-11-26 20:51:34.965937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:23:40.104 [2024-11-26 20:51:34.965948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:35.003690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:35.003739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:40.104 [2024-11-26 20:51:35.003764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.668 ms 00:23:40.104 [2024-11-26 20:51:35.003776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:35.041368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:35.041410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:40.104 [2024-11-26 20:51:35.041428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.534 ms 00:23:40.104 [2024-11-26 20:51:35.041439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.104 [2024-11-26 20:51:35.042232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.104 [2024-11-26 20:51:35.042262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.104 [2024-11-26 20:51:35.042277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:23:40.104 [2024-11-26 20:51:35.042289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.363 [2024-11-26 20:51:35.161415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.363 [2024-11-26 20:51:35.161468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:40.363 [2024-11-26 20:51:35.161496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.029 ms 00:23:40.363 [2024-11-26 20:51:35.161507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.363 [2024-11-26 
20:51:35.202516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.363 [2024-11-26 20:51:35.202567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:40.363 [2024-11-26 20:51:35.202587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.858 ms 00:23:40.363 [2024-11-26 20:51:35.202598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.363 [2024-11-26 20:51:35.242075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.363 [2024-11-26 20:51:35.242123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:40.363 [2024-11-26 20:51:35.242141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.403 ms 00:23:40.363 [2024-11-26 20:51:35.242152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.363 [2024-11-26 20:51:35.281054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.363 [2024-11-26 20:51:35.281102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:40.363 [2024-11-26 20:51:35.281121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.840 ms 00:23:40.363 [2024-11-26 20:51:35.281132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.363 [2024-11-26 20:51:35.281194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.363 [2024-11-26 20:51:35.281207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:40.363 [2024-11-26 20:51:35.281229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:40.363 [2024-11-26 20:51:35.281239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.363 [2024-11-26 20:51:35.281385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.363 [2024-11-26 20:51:35.281401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:40.363 [2024-11-26 20:51:35.281415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:40.363 [2024-11-26 20:51:35.281425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.363 [2024-11-26 20:51:35.282678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3588.740 ms, result 0 00:23:40.363 { 00:23:40.363 "name": "ftl0", 00:23:40.363 "uuid": "684d08a3-49ee-4983-9a65-32aa692be41c" 00:23:40.363 } 00:23:40.363 20:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:40.363 20:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:40.363 20:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:40.363 20:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:23:40.363 20:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:40.363 20:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:40.363 20:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:40.622 20:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:40.880 [ 00:23:40.880 { 00:23:40.880 "name": "ftl0", 00:23:40.880 "aliases": [ 00:23:40.880 "684d08a3-49ee-4983-9a65-32aa692be41c" 00:23:40.880 ], 00:23:40.880 "product_name": "FTL 
disk", 00:23:40.880 "block_size": 4096, 00:23:40.880 "num_blocks": 20971520, 00:23:40.880 "uuid": "684d08a3-49ee-4983-9a65-32aa692be41c", 00:23:40.880 "assigned_rate_limits": { 00:23:40.880 "rw_ios_per_sec": 0, 00:23:40.880 "rw_mbytes_per_sec": 0, 00:23:40.880 "r_mbytes_per_sec": 0, 00:23:40.880 "w_mbytes_per_sec": 0 00:23:40.880 }, 00:23:40.880 "claimed": false, 00:23:40.880 "zoned": false, 00:23:40.880 "supported_io_types": { 00:23:40.880 "read": true, 00:23:40.880 "write": true, 00:23:40.880 "unmap": true, 00:23:40.880 "flush": true, 00:23:40.880 "reset": false, 00:23:40.880 "nvme_admin": false, 00:23:40.880 "nvme_io": false, 00:23:40.880 "nvme_io_md": false, 00:23:40.880 "write_zeroes": true, 00:23:40.880 "zcopy": false, 00:23:40.880 "get_zone_info": false, 00:23:40.880 "zone_management": false, 00:23:40.880 "zone_append": false, 00:23:40.880 "compare": false, 00:23:40.880 "compare_and_write": false, 00:23:40.880 "abort": false, 00:23:40.880 "seek_hole": false, 00:23:40.880 "seek_data": false, 00:23:40.880 "copy": false, 00:23:40.881 "nvme_iov_md": false 00:23:40.881 }, 00:23:40.881 "driver_specific": { 00:23:40.881 "ftl": { 00:23:40.881 "base_bdev": "fb2a7147-f19a-4fd9-a242-ec4055e8ca2e", 00:23:40.881 "cache": "nvc0n1p0" 00:23:40.881 } 00:23:40.881 } 00:23:40.881 } 00:23:40.881 ] 00:23:40.881 20:51:35 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:23:40.881 20:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:23:40.881 20:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:41.138 20:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:23:41.138 20:51:35 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:41.138 [2024-11-26 20:51:36.091386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.138 [2024-11-26 20:51:36.091454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:41.138 [2024-11-26 20:51:36.091472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:41.138 [2024-11-26 20:51:36.091489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.138 [2024-11-26 20:51:36.091554] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:41.138 [2024-11-26 20:51:36.096137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.138 [2024-11-26 20:51:36.096180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:41.138 [2024-11-26 20:51:36.096215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.555 ms 00:23:41.138 [2024-11-26 20:51:36.096228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.138 [2024-11-26 20:51:36.096838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.138 [2024-11-26 20:51:36.096866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:41.138 [2024-11-26 20:51:36.096882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:23:41.138 [2024-11-26 20:51:36.096894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.138 [2024-11-26 20:51:36.099707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.138 [2024-11-26 20:51:36.099740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:41.138 
[2024-11-26 20:51:36.099758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:23:41.138 [2024-11-26 20:51:36.099771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.138 [2024-11-26 20:51:36.105264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.138 [2024-11-26 20:51:36.105319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:41.138 [2024-11-26 20:51:36.105341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.454 ms 00:23:41.138 [2024-11-26 20:51:36.105351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.397 [2024-11-26 20:51:36.147500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.397 [2024-11-26 20:51:36.147550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:41.397 [2024-11-26 20:51:36.147590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.062 ms 00:23:41.397 [2024-11-26 20:51:36.147602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.397 [2024-11-26 20:51:36.172142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.398 [2024-11-26 20:51:36.172192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:41.398 [2024-11-26 20:51:36.172232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.420 ms 00:23:41.398 [2024-11-26 20:51:36.172244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.398 [2024-11-26 20:51:36.172499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.398 [2024-11-26 20:51:36.172519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:41.398 [2024-11-26 20:51:36.172535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:23:41.398 [2024-11-26 20:51:36.172546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.398 [2024-11-26 20:51:36.209941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.398 [2024-11-26 20:51:36.209999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:41.398 [2024-11-26 20:51:36.210019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.361 ms 00:23:41.398 [2024-11-26 20:51:36.210030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.398 [2024-11-26 20:51:36.248110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.398 [2024-11-26 20:51:36.248150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:41.398 [2024-11-26 20:51:36.248184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.020 ms 00:23:41.398 [2024-11-26 20:51:36.248195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.398 [2024-11-26 20:51:36.286824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.398 [2024-11-26 20:51:36.286866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:41.398 [2024-11-26 20:51:36.286883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.567 ms 00:23:41.398 [2024-11-26 20:51:36.286909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.398 [2024-11-26 20:51:36.325075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.398 [2024-11-26 20:51:36.325117] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:41.398 [2024-11-26 20:51:36.325134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.022 ms 00:23:41.398 [2024-11-26 20:51:36.325161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.398 [2024-11-26 20:51:36.325214] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:41.398 [2024-11-26 20:51:36.325234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 
[2024-11-26 20:51:36.325549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:23:41.398 [2024-11-26 20:51:36.325904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.325995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:41.398 [2024-11-26 20:51:36.326157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:41.399 [2024-11-26 20:51:36.326648] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:41.399 [2024-11-26 20:51:36.326662] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 684d08a3-49ee-4983-9a65-32aa692be41c 00:23:41.399 [2024-11-26 20:51:36.326675] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:41.399 [2024-11-26 20:51:36.326691] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:41.399 [2024-11-26 20:51:36.326705] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:41.399 [2024-11-26 20:51:36.326719] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:41.399 [2024-11-26 20:51:36.326731] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:41.399 [2024-11-26 20:51:36.326745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:41.399 [2024-11-26 20:51:36.326756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:41.399 [2024-11-26 20:51:36.326769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:41.399 [2024-11-26 20:51:36.326779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:41.399 [2024-11-26 20:51:36.326793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.399 [2024-11-26 20:51:36.326804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:41.399 [2024-11-26 20:51:36.326818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.581 ms 00:23:41.399 [2024-11-26 20:51:36.326829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.399 [2024-11-26 20:51:36.349271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.399 [2024-11-26 20:51:36.349311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:41.399 [2024-11-26 20:51:36.349345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.369 ms 00:23:41.399 [2024-11-26 20:51:36.349357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.399 [2024-11-26 20:51:36.349995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.399 [2024-11-26 20:51:36.350016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:41.399 [2024-11-26 20:51:36.350032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:23:41.399 [2024-11-26 20:51:36.350043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.657 [2024-11-26 20:51:36.425017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.657 [2024-11-26 20:51:36.425071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:41.657 [2024-11-26 20:51:36.425090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.658 [2024-11-26 20:51:36.425102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
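[editor's annotation, not part of the captured trace] In the statistics dump above, WAF is the write-amplification factor, here evidently total media writes over user writes:

  WAF = total writes / user writes = 960 / 0 -> inf    (no user I/O has hit ftl0 yet)

The "Rollback" trace_step entries here and below belong to the 'FTL shutdown' sequence started by bdev_ftl_unload: each initialization step gets a matching rollback, logged roughly in reverse of the startup order with the same name/duration/status format.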
00:23:41.658 [2024-11-26 20:51:36.425185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.658 [2024-11-26 20:51:36.425196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:41.658 [2024-11-26 20:51:36.425210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.658 [2024-11-26 20:51:36.425220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.658 [2024-11-26 20:51:36.425358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.658 [2024-11-26 20:51:36.425377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:41.658 [2024-11-26 20:51:36.425390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.658 [2024-11-26 20:51:36.425400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.658 [2024-11-26 20:51:36.425436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.658 [2024-11-26 20:51:36.425447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:41.658 [2024-11-26 20:51:36.425460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.658 [2024-11-26 20:51:36.425470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.658 [2024-11-26 20:51:36.564991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.658 [2024-11-26 20:51:36.565045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:41.658 [2024-11-26 20:51:36.565064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.658 [2024-11-26 20:51:36.565075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.916 [2024-11-26 20:51:36.670451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.916 [2024-11-26 20:51:36.670514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:41.916 [2024-11-26 20:51:36.670533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.916 [2024-11-26 20:51:36.670544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.916 [2024-11-26 20:51:36.670687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.916 [2024-11-26 20:51:36.670701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:41.916 [2024-11-26 20:51:36.670718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.916 [2024-11-26 20:51:36.670728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.916 [2024-11-26 20:51:36.670812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.916 [2024-11-26 20:51:36.670825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:41.916 [2024-11-26 20:51:36.670838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.916 [2024-11-26 20:51:36.670849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.916 [2024-11-26 20:51:36.670989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.916 [2024-11-26 20:51:36.671003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:41.916 [2024-11-26 20:51:36.671020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.916 [2024-11-26 
20:51:36.671030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.916 [2024-11-26 20:51:36.671117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.916 [2024-11-26 20:51:36.671131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:41.916 [2024-11-26 20:51:36.671144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.916 [2024-11-26 20:51:36.671155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.916 [2024-11-26 20:51:36.671228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.916 [2024-11-26 20:51:36.671241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:41.917 [2024-11-26 20:51:36.671256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.917 [2024-11-26 20:51:36.671269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.917 [2024-11-26 20:51:36.671356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:41.917 [2024-11-26 20:51:36.671371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:41.917 [2024-11-26 20:51:36.671384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:41.917 [2024-11-26 20:51:36.671401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.917 [2024-11-26 20:51:36.671608] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 580.193 ms, result 0 00:23:41.917 true 00:23:41.917 20:51:36 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77580 00:23:41.917 20:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77580 ']' 00:23:41.917 20:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77580 00:23:41.917 20:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:23:41.917 20:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.917 20:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77580 00:23:41.917 killing process with pid 77580 00:23:41.917 20:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:41.917 20:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:41.917 20:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77580' 00:23:41.917 20:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77580 00:23:41.917 20:51:36 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77580 00:23:47.181 20:51:42 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:23:47.181 20:51:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:47.181 20:51:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:23:47.181 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.181 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:47.181 20:51:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:47.182 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:47.182 20:51:42 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:47.182 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:47.182 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:47.182 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:47.182 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:47.182 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:47.182 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.182 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:47.182 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:47.182 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:47.440 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:47.440 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:47.440 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:47.440 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:47.440 20:51:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:47.699 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:23:47.699 fio-3.35 00:23:47.699 Starting 1 thread 00:23:52.997 00:23:52.997 test: (groupid=0, jobs=1): err= 0: pid=77796: Tue Nov 26 20:51:47 2024 00:23:52.997 read: IOPS=1032, BW=68.5MiB/s (71.9MB/s)(255MiB/3714msec) 00:23:52.997 slat (nsec): min=4328, max=32547, avg=6845.25, stdev=2889.00 00:23:52.997 clat (usec): min=305, max=796, avg=424.85, stdev=58.12 00:23:52.997 lat (usec): min=311, max=803, avg=431.69, stdev=58.79 00:23:52.997 clat percentiles (usec): 00:23:52.997 | 1.00th=[ 330], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 367], 00:23:52.997 | 30.00th=[ 400], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 429], 00:23:52.997 | 70.00th=[ 449], 80.00th=[ 478], 90.00th=[ 498], 95.00th=[ 523], 00:23:52.997 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 676], 99.95th=[ 791], 00:23:52.997 | 99.99th=[ 799] 00:23:52.997 write: IOPS=1039, BW=69.0MiB/s (72.4MB/s)(256MiB/3710msec); 0 zone resets 00:23:52.997 slat (nsec): min=16447, max=81977, avg=22174.63, stdev=5104.20 00:23:52.997 clat (usec): min=338, max=1151, avg=499.53, stdev=69.55 00:23:52.997 lat (usec): min=359, max=1173, avg=521.70, stdev=70.00 00:23:52.997 clat percentiles (usec): 00:23:52.997 | 1.00th=[ 375], 5.00th=[ 420], 10.00th=[ 433], 20.00th=[ 441], 00:23:52.997 | 30.00th=[ 453], 40.00th=[ 474], 50.00th=[ 498], 60.00th=[ 510], 00:23:52.997 | 70.00th=[ 523], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 603], 00:23:52.997 | 99.00th=[ 775], 99.50th=[ 824], 99.90th=[ 947], 99.95th=[ 1029], 00:23:52.997 | 99.99th=[ 1156] 00:23:52.997 bw ( KiB/s): min=67728, max=74392, per=100.00%, avg=70720.00, stdev=2384.21, samples=7 00:23:52.997 iops : min= 996, max= 1094, avg=1040.00, stdev=35.06, samples=7 00:23:52.997 lat (usec) : 500=71.18%, 750=28.18%, 1000=0.61% 00:23:52.997 
lat (msec) : 2=0.03% 00:23:52.997 cpu : usr=99.22%, sys=0.08%, ctx=7, majf=0, minf=1169 00:23:52.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.997 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:52.997 00:23:52.997 Run status group 0 (all jobs): 00:23:52.997 READ: bw=68.5MiB/s (71.9MB/s), 68.5MiB/s-68.5MiB/s (71.9MB/s-71.9MB/s), io=255MiB (267MB), run=3714-3714msec 00:23:52.997 WRITE: bw=69.0MiB/s (72.4MB/s), 69.0MiB/s-69.0MiB/s (72.4MB/s-72.4MB/s), io=256MiB (269MB), run=3710-3710msec 00:23:54.896 ----------------------------------------------------- 00:23:54.896 Suppressions used: 00:23:54.896 count bytes template 00:23:54.896 1 5 /usr/src/fio/parse.c 00:23:54.896 1 8 libtcmalloc_minimal.so 00:23:54.896 1 904 libcrypto.so 00:23:54.896 ----------------------------------------------------- 00:23:54.896 00:23:54.896 20:51:49 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:23:54.896 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.896 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:54.896 20:51:49 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:54.896 20:51:49 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:23:54.896 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.896 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:54.896 20:51:49 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:54.896 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:54.897 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:54.897 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:54.897 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:54.897 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:54.897 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:54.897 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:54.897 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:54.897 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:54.897 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:54.897 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:55.154 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:55.154 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:55.154 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:55.154 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:55.154 20:51:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:55.411 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:55.411 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:55.411 fio-3.35 00:23:55.411 Starting 2 threads 00:24:27.474 00:24:27.474 first_half: (groupid=0, jobs=1): err= 0: pid=77906: Tue Nov 26 20:52:18 2024 00:24:27.474 read: IOPS=2467, BW=9869KiB/s (10.1MB/s)(255MiB/26442msec) 00:24:27.474 slat (nsec): min=3627, max=41955, avg=6396.83, stdev=1867.48 00:24:27.474 clat (usec): min=858, max=372461, avg=37767.21, stdev=21074.66 00:24:27.474 lat (usec): min=866, max=372465, avg=37773.60, stdev=21074.85 00:24:27.474 clat percentiles (msec): 00:24:27.474 | 1.00th=[ 9], 5.00th=[ 30], 10.00th=[ 33], 20.00th=[ 34], 00:24:27.474 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:24:27.474 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 42], 95.00th=[ 51], 00:24:27.474 | 99.00th=[ 150], 99.50th=[ 174], 99.90th=[ 288], 99.95th=[ 317], 00:24:27.474 | 99.99th=[ 363] 00:24:27.474 write: IOPS=2881, BW=11.3MiB/s (11.8MB/s)(256MiB/22745msec); 0 zone resets 00:24:27.474 slat (usec): min=4, max=1150, avg= 8.63, stdev= 9.46 00:24:27.474 clat (usec): min=382, max=108381, avg=14024.52, stdev=23620.21 00:24:27.474 lat (usec): min=408, max=108388, avg=14033.15, stdev=23620.36 00:24:27.474 clat percentiles (usec): 00:24:27.474 | 1.00th=[ 840], 5.00th=[ 1106], 10.00th=[ 1270], 20.00th=[ 1582], 00:24:27.474 | 30.00th=[ 2147], 40.00th=[ 4047], 50.00th=[ 5473], 60.00th=[ 6652], 00:24:27.474 | 70.00th=[ 9372], 80.00th=[ 14091], 90.00th=[ 41157], 95.00th=[ 83362], 00:24:27.474 | 99.00th=[ 96994], 99.50th=[100140], 99.90th=[106431], 99.95th=[107480], 00:24:27.474 | 99.99th=[107480] 00:24:27.474 bw ( KiB/s): min= 928, max=38856, per=78.43%, avg=18078.90, stdev=12517.29, samples=29 00:24:27.474 iops : min= 232, max= 9714, avg=4519.72, stdev=3129.32, samples=29 00:24:27.474 lat (usec) : 500=0.02%, 750=0.19%, 1000=1.18% 00:24:27.474 lat (msec) : 2=13.07%, 4=5.71%, 10=17.01%, 20=8.07%, 50=47.67% 00:24:27.474 lat (msec) : 100=5.63%, 250=1.39%, 500=0.07% 00:24:27.474 cpu : usr=99.17%, sys=0.20%, ctx=50, majf=0, minf=5555 00:24:27.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:27.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.474 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:27.474 issued rwts: total=65238,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:27.474 second_half: (groupid=0, jobs=1): err= 0: pid=77907: Tue Nov 26 20:52:18 2024 00:24:27.474 read: IOPS=2478, BW=9915KiB/s (10.2MB/s)(255MiB/26295msec) 00:24:27.474 slat (nsec): min=3635, max=39878, avg=6438.57, stdev=1875.50 00:24:27.474 clat (usec): min=821, max=387556, avg=38378.05, stdev=19853.73 00:24:27.474 lat (usec): min=828, max=387563, avg=38384.49, stdev=19853.90 00:24:27.474 clat percentiles (msec): 00:24:27.474 | 1.00th=[ 6], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:24:27.474 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 35], 00:24:27.474 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 42], 95.00th=[ 51], 
00:24:27.475 | 99.00th=[ 150], 99.50th=[ 169], 99.90th=[ 205], 99.95th=[ 255], 00:24:27.475 | 99.99th=[ 380] 00:24:27.475 write: IOPS=3167, BW=12.4MiB/s (13.0MB/s)(256MiB/20691msec); 0 zone resets 00:24:27.475 slat (usec): min=4, max=595, avg= 8.55, stdev= 4.75 00:24:27.475 clat (usec): min=414, max=108764, avg=13164.70, stdev=23433.34 00:24:27.475 lat (usec): min=424, max=108770, avg=13173.26, stdev=23433.47 00:24:27.475 clat percentiles (usec): 00:24:27.475 | 1.00th=[ 988], 5.00th=[ 1205], 10.00th=[ 1352], 20.00th=[ 1598], 00:24:27.475 | 30.00th=[ 1893], 40.00th=[ 3195], 50.00th=[ 4752], 60.00th=[ 6128], 00:24:27.475 | 70.00th=[ 8455], 80.00th=[ 13566], 90.00th=[ 38011], 95.00th=[ 83362], 00:24:27.475 | 99.00th=[ 95945], 99.50th=[100140], 99.90th=[105382], 99.95th=[106431], 00:24:27.475 | 99.99th=[108528] 00:24:27.475 bw ( KiB/s): min= 8, max=60728, per=84.25%, avg=19420.52, stdev=15392.79, samples=27 00:24:27.475 iops : min= 2, max=15182, avg=4855.11, stdev=3848.20, samples=27 00:24:27.475 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.50% 00:24:27.475 lat (msec) : 2=15.90%, 4=7.18%, 10=12.86%, 20=9.01%, 50=47.45% 00:24:27.475 lat (msec) : 100=5.56%, 250=1.47%, 500=0.03% 00:24:27.475 cpu : usr=99.26%, sys=0.17%, ctx=40, majf=0, minf=5548 00:24:27.475 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:27.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.475 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:27.475 issued rwts: total=65178,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.475 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:27.475 00:24:27.475 Run status group 0 (all jobs): 00:24:27.475 READ: bw=19.3MiB/s (20.2MB/s), 9869KiB/s-9915KiB/s (10.1MB/s-10.2MB/s), io=509MiB (534MB), run=26295-26442msec 00:24:27.475 WRITE: bw=22.5MiB/s (23.6MB/s), 11.3MiB/s-12.4MiB/s (11.8MB/s-13.0MB/s), io=512MiB (537MB), run=20691-22745msec 00:24:27.475 ----------------------------------------------------- 00:24:27.475 Suppressions used: 00:24:27.475 count bytes template 00:24:27.475 2 10 /usr/src/fio/parse.c 00:24:27.475 1 96 /usr/src/fio/iolog.c 00:24:27.475 1 8 libtcmalloc_minimal.so 00:24:27.475 1 904 libcrypto.so 00:24:27.475 ----------------------------------------------------- 00:24:27.475 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:27.475 20:52:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:27.475 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:27.475 fio-3.35 00:24:27.475 Starting 1 thread 00:24:42.347 00:24:42.347 test: (groupid=0, jobs=1): err= 0: pid=78244: Tue Nov 26 20:52:36 2024 00:24:42.347 read: IOPS=7103, BW=27.7MiB/s (29.1MB/s)(255MiB/9179msec) 00:24:42.347 slat (nsec): min=3630, max=51029, avg=5942.75, stdev=1799.63 00:24:42.347 clat (usec): min=749, max=40584, avg=18010.18, stdev=1599.93 00:24:42.347 lat (usec): min=753, max=40589, avg=18016.13, stdev=1600.02 00:24:42.347 clat percentiles (usec): 00:24:42.347 | 1.00th=[16581], 5.00th=[16712], 10.00th=[16909], 20.00th=[17171], 00:24:42.347 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:24:42.347 | 70.00th=[17957], 80.00th=[18482], 90.00th=[20317], 95.00th=[21103], 00:24:42.347 | 99.00th=[23200], 99.50th=[25035], 99.90th=[30278], 99.95th=[35390], 00:24:42.347 | 99.99th=[39584] 00:24:42.347 write: IOPS=12.8k, BW=50.1MiB/s (52.5MB/s)(256MiB/5114msec); 0 zone resets 00:24:42.347 slat (usec): min=4, max=612, avg= 8.70, stdev= 5.86 00:24:42.347 clat (usec): min=652, max=57875, avg=9935.76, stdev=12501.26 00:24:42.347 lat (usec): min=660, max=57883, avg=9944.46, stdev=12501.30 00:24:42.347 clat percentiles (usec): 00:24:42.347 | 1.00th=[ 914], 5.00th=[ 1074], 10.00th=[ 1188], 20.00th=[ 1369], 00:24:42.347 | 30.00th=[ 1582], 40.00th=[ 2024], 50.00th=[ 6652], 60.00th=[ 7570], 00:24:42.347 | 70.00th=[ 8455], 80.00th=[10028], 90.00th=[35914], 95.00th=[38536], 00:24:42.347 | 99.00th=[44303], 99.50th=[45351], 99.90th=[47449], 99.95th=[48497], 00:24:42.347 | 99.99th=[52691] 00:24:42.347 bw ( KiB/s): min= 8936, max=69152, per=92.96%, avg=47652.45, stdev=15509.99, samples=11 00:24:42.347 iops : min= 2234, max=17288, avg=11913.09, stdev=3877.49, samples=11 00:24:42.347 lat (usec) : 750=0.02%, 1000=1.32% 00:24:42.347 lat (msec) : 2=18.64%, 4=1.08%, 10=18.88%, 20=46.10%, 50=13.95% 00:24:42.347 lat (msec) : 100=0.01% 00:24:42.347 cpu : usr=98.95%, sys=0.24%, 
ctx=22, majf=0, minf=5565 00:24:42.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:42.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.347 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:42.347 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:42.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:42.347 00:24:42.347 Run status group 0 (all jobs): 00:24:42.347 READ: bw=27.7MiB/s (29.1MB/s), 27.7MiB/s-27.7MiB/s (29.1MB/s-29.1MB/s), io=255MiB (267MB), run=9179-9179msec 00:24:42.347 WRITE: bw=50.1MiB/s (52.5MB/s), 50.1MiB/s-50.1MiB/s (52.5MB/s-52.5MB/s), io=256MiB (268MB), run=5114-5114msec 00:24:44.247 ----------------------------------------------------- 00:24:44.247 Suppressions used: 00:24:44.247 count bytes template 00:24:44.247 1 5 /usr/src/fio/parse.c 00:24:44.247 2 192 /usr/src/fio/iolog.c 00:24:44.247 1 8 libtcmalloc_minimal.so 00:24:44.247 1 904 libcrypto.so 00:24:44.247 ----------------------------------------------------- 00:24:44.247 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:44.247 Remove shared memory files 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58022 /dev/shm/spdk_tgt_trace.pid76480 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:24:44.247 00:24:44.247 real 1m11.985s 00:24:44.247 user 2m35.855s 00:24:44.247 sys 0m4.247s 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.247 20:52:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:44.247 ************************************ 00:24:44.247 END TEST ftl_fio_basic 00:24:44.247 ************************************ 00:24:44.247 20:52:39 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:44.247 20:52:39 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:44.247 20:52:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.247 20:52:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:44.247 ************************************ 00:24:44.247 START TEST ftl_bdevperf 00:24:44.247 ************************************ 00:24:44.247 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:44.505 * Looking for test storage... 
00:24:44.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:44.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.505 --rc genhtml_branch_coverage=1 00:24:44.505 --rc genhtml_function_coverage=1 00:24:44.505 --rc genhtml_legend=1 00:24:44.505 --rc geninfo_all_blocks=1 00:24:44.505 --rc geninfo_unexecuted_blocks=1 00:24:44.505 00:24:44.505 ' 00:24:44.505 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:44.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.505 --rc genhtml_branch_coverage=1 00:24:44.505 
--rc genhtml_function_coverage=1 00:24:44.505 --rc genhtml_legend=1 00:24:44.505 --rc geninfo_all_blocks=1 00:24:44.506 --rc geninfo_unexecuted_blocks=1 00:24:44.506 00:24:44.506 ' 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:44.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.506 --rc genhtml_branch_coverage=1 00:24:44.506 --rc genhtml_function_coverage=1 00:24:44.506 --rc genhtml_legend=1 00:24:44.506 --rc geninfo_all_blocks=1 00:24:44.506 --rc geninfo_unexecuted_blocks=1 00:24:44.506 00:24:44.506 ' 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:44.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.506 --rc genhtml_branch_coverage=1 00:24:44.506 --rc genhtml_function_coverage=1 00:24:44.506 --rc genhtml_legend=1 00:24:44.506 --rc geninfo_all_blocks=1 00:24:44.506 --rc geninfo_unexecuted_blocks=1 00:24:44.506 00:24:44.506 ' 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78488 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78488 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78488 ']' 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.506 20:52:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.764 [2024-11-26 20:52:39.508195] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:24:44.764 [2024-11-26 20:52:39.508371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78488 ] 00:24:44.764 [2024-11-26 20:52:39.710917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.022 [2024-11-26 20:52:39.864798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.588 20:52:40 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.588 20:52:40 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:45.588 20:52:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:45.588 20:52:40 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:24:45.588 20:52:40 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:45.588 20:52:40 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:24:45.588 20:52:40 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:24:45.588 20:52:40 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:45.846 20:52:40 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:45.846 20:52:40 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:24:45.846 20:52:40 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:45.846 20:52:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:45.846 20:52:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:45.846 20:52:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:45.846 20:52:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:45.846 20:52:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:46.104 20:52:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:46.104 { 00:24:46.104 "name": "nvme0n1", 00:24:46.104 "aliases": [ 00:24:46.104 "f6343a8c-eb39-42dd-aec2-11fc73a77722" 00:24:46.104 ], 00:24:46.104 "product_name": "NVMe disk", 00:24:46.104 "block_size": 4096, 00:24:46.104 "num_blocks": 1310720, 00:24:46.104 "uuid": "f6343a8c-eb39-42dd-aec2-11fc73a77722", 00:24:46.104 "numa_id": -1, 00:24:46.104 "assigned_rate_limits": { 00:24:46.104 "rw_ios_per_sec": 0, 00:24:46.104 "rw_mbytes_per_sec": 0, 00:24:46.104 "r_mbytes_per_sec": 0, 00:24:46.104 "w_mbytes_per_sec": 0 00:24:46.104 }, 00:24:46.104 "claimed": true, 00:24:46.104 "claim_type": "read_many_write_one", 00:24:46.104 "zoned": false, 00:24:46.104 "supported_io_types": { 00:24:46.104 "read": true, 00:24:46.104 "write": true, 00:24:46.104 "unmap": true, 00:24:46.104 "flush": true, 00:24:46.104 "reset": true, 00:24:46.104 "nvme_admin": true, 00:24:46.104 "nvme_io": true, 00:24:46.104 "nvme_io_md": false, 00:24:46.104 "write_zeroes": true, 00:24:46.104 "zcopy": false, 00:24:46.104 "get_zone_info": false, 00:24:46.104 "zone_management": false, 00:24:46.104 "zone_append": false, 00:24:46.104 "compare": true, 00:24:46.104 "compare_and_write": false, 00:24:46.104 "abort": true, 00:24:46.104 "seek_hole": false, 00:24:46.104 "seek_data": false, 00:24:46.104 "copy": true, 00:24:46.104 "nvme_iov_md": false 00:24:46.104 }, 00:24:46.104 "driver_specific": { 00:24:46.104 
"nvme": [ 00:24:46.104 { 00:24:46.104 "pci_address": "0000:00:11.0", 00:24:46.104 "trid": { 00:24:46.104 "trtype": "PCIe", 00:24:46.104 "traddr": "0000:00:11.0" 00:24:46.104 }, 00:24:46.104 "ctrlr_data": { 00:24:46.104 "cntlid": 0, 00:24:46.104 "vendor_id": "0x1b36", 00:24:46.104 "model_number": "QEMU NVMe Ctrl", 00:24:46.104 "serial_number": "12341", 00:24:46.104 "firmware_revision": "8.0.0", 00:24:46.105 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:46.105 "oacs": { 00:24:46.105 "security": 0, 00:24:46.105 "format": 1, 00:24:46.105 "firmware": 0, 00:24:46.105 "ns_manage": 1 00:24:46.105 }, 00:24:46.105 "multi_ctrlr": false, 00:24:46.105 "ana_reporting": false 00:24:46.105 }, 00:24:46.105 "vs": { 00:24:46.105 "nvme_version": "1.4" 00:24:46.105 }, 00:24:46.105 "ns_data": { 00:24:46.105 "id": 1, 00:24:46.105 "can_share": false 00:24:46.105 } 00:24:46.105 } 00:24:46.105 ], 00:24:46.105 "mp_policy": "active_passive" 00:24:46.105 } 00:24:46.105 } 00:24:46.105 ]' 00:24:46.363 20:52:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:46.363 20:52:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:46.363 20:52:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:46.363 20:52:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:46.363 20:52:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:46.363 20:52:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:24:46.363 20:52:41 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:24:46.363 20:52:41 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:46.363 20:52:41 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:24:46.363 20:52:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:46.363 20:52:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:46.620 20:52:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=5deccf31-a533-4b26-8eb9-0f3bf4c815a8 00:24:46.620 20:52:41 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:24:46.621 20:52:41 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5deccf31-a533-4b26-8eb9-0f3bf4c815a8 00:24:46.621 20:52:41 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:46.908 20:52:41 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=6c01375a-c7cb-48f7-bbb6-5061bced217a 00:24:46.908 20:52:41 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6c01375a-c7cb-48f7-bbb6-5061bced217a 00:24:47.166 20:52:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:47.166 20:52:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:47.166 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:24:47.166 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:47.166 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:47.166 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:24:47.166 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:47.166 20:52:42 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:47.166 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:47.166 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:47.166 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:47.166 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:47.423 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:47.423 { 00:24:47.423 "name": "dc8b055f-03b4-4461-8dfc-9277f7591edd", 00:24:47.423 "aliases": [ 00:24:47.423 "lvs/nvme0n1p0" 00:24:47.423 ], 00:24:47.423 "product_name": "Logical Volume", 00:24:47.423 "block_size": 4096, 00:24:47.423 "num_blocks": 26476544, 00:24:47.423 "uuid": "dc8b055f-03b4-4461-8dfc-9277f7591edd", 00:24:47.423 "assigned_rate_limits": { 00:24:47.423 "rw_ios_per_sec": 0, 00:24:47.423 "rw_mbytes_per_sec": 0, 00:24:47.423 "r_mbytes_per_sec": 0, 00:24:47.423 "w_mbytes_per_sec": 0 00:24:47.423 }, 00:24:47.423 "claimed": false, 00:24:47.423 "zoned": false, 00:24:47.423 "supported_io_types": { 00:24:47.423 "read": true, 00:24:47.423 "write": true, 00:24:47.423 "unmap": true, 00:24:47.423 "flush": false, 00:24:47.423 "reset": true, 00:24:47.423 "nvme_admin": false, 00:24:47.423 "nvme_io": false, 00:24:47.423 "nvme_io_md": false, 00:24:47.423 "write_zeroes": true, 00:24:47.423 "zcopy": false, 00:24:47.423 "get_zone_info": false, 00:24:47.423 "zone_management": false, 00:24:47.423 "zone_append": false, 00:24:47.423 "compare": false, 00:24:47.423 "compare_and_write": false, 00:24:47.423 "abort": false, 00:24:47.423 "seek_hole": true, 00:24:47.423 "seek_data": true, 00:24:47.423 "copy": false, 00:24:47.423 "nvme_iov_md": false 00:24:47.423 }, 00:24:47.423 "driver_specific": { 00:24:47.423 "lvol": { 00:24:47.423 "lvol_store_uuid": "6c01375a-c7cb-48f7-bbb6-5061bced217a", 00:24:47.423 "base_bdev": "nvme0n1", 00:24:47.423 "thin_provision": true, 00:24:47.423 "num_allocated_clusters": 0, 00:24:47.423 "snapshot": false, 00:24:47.423 "clone": false, 00:24:47.423 "esnap_clone": false 00:24:47.423 } 00:24:47.423 } 00:24:47.423 } 00:24:47.423 ]' 00:24:47.423 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:47.423 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:47.423 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:47.423 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:47.423 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:47.423 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:47.423 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:24:47.423 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:24:47.423 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:47.996 { 00:24:47.996 "name": "dc8b055f-03b4-4461-8dfc-9277f7591edd", 00:24:47.996 "aliases": [ 00:24:47.996 "lvs/nvme0n1p0" 00:24:47.996 ], 00:24:47.996 "product_name": "Logical Volume", 00:24:47.996 "block_size": 4096, 00:24:47.996 "num_blocks": 26476544, 00:24:47.996 "uuid": "dc8b055f-03b4-4461-8dfc-9277f7591edd", 00:24:47.996 "assigned_rate_limits": { 00:24:47.996 "rw_ios_per_sec": 0, 00:24:47.996 "rw_mbytes_per_sec": 0, 00:24:47.996 "r_mbytes_per_sec": 0, 00:24:47.996 "w_mbytes_per_sec": 0 00:24:47.996 }, 00:24:47.996 "claimed": false, 00:24:47.996 "zoned": false, 00:24:47.996 "supported_io_types": { 00:24:47.996 "read": true, 00:24:47.996 "write": true, 00:24:47.996 "unmap": true, 00:24:47.996 "flush": false, 00:24:47.996 "reset": true, 00:24:47.996 "nvme_admin": false, 00:24:47.996 "nvme_io": false, 00:24:47.996 "nvme_io_md": false, 00:24:47.996 "write_zeroes": true, 00:24:47.996 "zcopy": false, 00:24:47.996 "get_zone_info": false, 00:24:47.996 "zone_management": false, 00:24:47.996 "zone_append": false, 00:24:47.996 "compare": false, 00:24:47.996 "compare_and_write": false, 00:24:47.996 "abort": false, 00:24:47.996 "seek_hole": true, 00:24:47.996 "seek_data": true, 00:24:47.996 "copy": false, 00:24:47.996 "nvme_iov_md": false 00:24:47.996 }, 00:24:47.996 "driver_specific": { 00:24:47.996 "lvol": { 00:24:47.996 "lvol_store_uuid": "6c01375a-c7cb-48f7-bbb6-5061bced217a", 00:24:47.996 "base_bdev": "nvme0n1", 00:24:47.996 "thin_provision": true, 00:24:47.996 "num_allocated_clusters": 0, 00:24:47.996 "snapshot": false, 00:24:47.996 "clone": false, 00:24:47.996 "esnap_clone": false 00:24:47.996 } 00:24:47.996 } 00:24:47.996 } 00:24:47.996 ]' 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:24:47.996 20:52:42 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:48.253 20:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:24:48.253 20:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:48.253 20:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:48.253 20:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:48.253 20:52:43 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:24:48.253 20:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:48.253 20:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dc8b055f-03b4-4461-8dfc-9277f7591edd 00:24:48.510 20:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:48.510 { 00:24:48.510 "name": "dc8b055f-03b4-4461-8dfc-9277f7591edd", 00:24:48.510 "aliases": [ 00:24:48.510 "lvs/nvme0n1p0" 00:24:48.510 ], 00:24:48.510 "product_name": "Logical Volume", 00:24:48.510 "block_size": 4096, 00:24:48.510 "num_blocks": 26476544, 00:24:48.510 "uuid": "dc8b055f-03b4-4461-8dfc-9277f7591edd", 00:24:48.510 "assigned_rate_limits": { 00:24:48.510 "rw_ios_per_sec": 0, 00:24:48.510 "rw_mbytes_per_sec": 0, 00:24:48.510 "r_mbytes_per_sec": 0, 00:24:48.510 "w_mbytes_per_sec": 0 00:24:48.510 }, 00:24:48.510 "claimed": false, 00:24:48.510 "zoned": false, 00:24:48.510 "supported_io_types": { 00:24:48.510 "read": true, 00:24:48.510 "write": true, 00:24:48.510 "unmap": true, 00:24:48.510 "flush": false, 00:24:48.510 "reset": true, 00:24:48.510 "nvme_admin": false, 00:24:48.510 "nvme_io": false, 00:24:48.510 "nvme_io_md": false, 00:24:48.510 "write_zeroes": true, 00:24:48.510 "zcopy": false, 00:24:48.510 "get_zone_info": false, 00:24:48.510 "zone_management": false, 00:24:48.510 "zone_append": false, 00:24:48.510 "compare": false, 00:24:48.510 "compare_and_write": false, 00:24:48.510 "abort": false, 00:24:48.510 "seek_hole": true, 00:24:48.510 "seek_data": true, 00:24:48.510 "copy": false, 00:24:48.510 "nvme_iov_md": false 00:24:48.510 }, 00:24:48.510 "driver_specific": { 00:24:48.510 "lvol": { 00:24:48.510 "lvol_store_uuid": "6c01375a-c7cb-48f7-bbb6-5061bced217a", 00:24:48.510 "base_bdev": "nvme0n1", 00:24:48.510 "thin_provision": true, 00:24:48.510 "num_allocated_clusters": 0, 00:24:48.510 "snapshot": false, 00:24:48.510 "clone": false, 00:24:48.510 "esnap_clone": false 00:24:48.510 } 00:24:48.510 } 00:24:48.510 } 00:24:48.510 ]' 00:24:48.510 20:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:48.510 20:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:48.510 20:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:48.767 20:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:48.767 20:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:48.767 20:52:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:48.767 20:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:24:48.767 20:52:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dc8b055f-03b4-4461-8dfc-9277f7591edd -c nvc0n1p0 --l2p_dram_limit 20 00:24:49.024 [2024-11-26 20:52:43.765354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.024 [2024-11-26 20:52:43.765414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:49.024 [2024-11-26 20:52:43.765432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:49.024 [2024-11-26 20:52:43.765448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.024 [2024-11-26 20:52:43.765519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.024 [2024-11-26 20:52:43.765535] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.024 [2024-11-26 20:52:43.765547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:49.024 [2024-11-26 20:52:43.765560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.024 [2024-11-26 20:52:43.765581] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:49.024 [2024-11-26 20:52:43.766756] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:49.024 [2024-11-26 20:52:43.766791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.024 [2024-11-26 20:52:43.766806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.024 [2024-11-26 20:52:43.766817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.216 ms 00:24:49.024 [2024-11-26 20:52:43.766830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.024 [2024-11-26 20:52:43.766911] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b2e7b0e5-032d-4bd5-8198-7ab0ca43e96f 00:24:49.024 [2024-11-26 20:52:43.768419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.024 [2024-11-26 20:52:43.768591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:49.024 [2024-11-26 20:52:43.768645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:49.024 [2024-11-26 20:52:43.768657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.024 [2024-11-26 20:52:43.776192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.024 [2024-11-26 20:52:43.776224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.024 [2024-11-26 20:52:43.776240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.481 ms 00:24:49.024 [2024-11-26 20:52:43.776255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.024 [2024-11-26 20:52:43.776361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.024 [2024-11-26 20:52:43.776376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.024 [2024-11-26 20:52:43.776395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:24:49.024 [2024-11-26 20:52:43.776406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.024 [2024-11-26 20:52:43.776482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.024 [2024-11-26 20:52:43.776494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:49.025 [2024-11-26 20:52:43.776508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:49.025 [2024-11-26 20:52:43.776519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.025 [2024-11-26 20:52:43.776550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:49.025 [2024-11-26 20:52:43.781766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.025 [2024-11-26 20:52:43.781802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.025 [2024-11-26 20:52:43.781815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.231 ms 00:24:49.025 [2024-11-26 20:52:43.781831] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.025 [2024-11-26 20:52:43.781863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.025 [2024-11-26 20:52:43.781877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:49.025 [2024-11-26 20:52:43.781888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:49.025 [2024-11-26 20:52:43.781900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.025 [2024-11-26 20:52:43.781933] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:49.025 [2024-11-26 20:52:43.782064] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:49.025 [2024-11-26 20:52:43.782079] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:49.025 [2024-11-26 20:52:43.782095] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:49.025 [2024-11-26 20:52:43.782109] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:49.025 [2024-11-26 20:52:43.782124] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:49.025 [2024-11-26 20:52:43.782135] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:49.025 [2024-11-26 20:52:43.782147] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:49.025 [2024-11-26 20:52:43.782157] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:49.025 [2024-11-26 20:52:43.782169] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:49.025 [2024-11-26 20:52:43.782183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.025 [2024-11-26 20:52:43.782195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:49.025 [2024-11-26 20:52:43.782206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:24:49.025 [2024-11-26 20:52:43.782220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.025 [2024-11-26 20:52:43.782290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.025 [2024-11-26 20:52:43.782303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:49.025 [2024-11-26 20:52:43.782314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:49.025 [2024-11-26 20:52:43.782329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.025 [2024-11-26 20:52:43.782409] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:49.025 [2024-11-26 20:52:43.782427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:49.025 [2024-11-26 20:52:43.782438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.025 [2024-11-26 20:52:43.782450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:49.025 [2024-11-26 20:52:43.782472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:49.025 
[2024-11-26 20:52:43.782494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:49.025 [2024-11-26 20:52:43.782503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.025 [2024-11-26 20:52:43.782524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:49.025 [2024-11-26 20:52:43.782548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:49.025 [2024-11-26 20:52:43.782558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.025 [2024-11-26 20:52:43.782570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:49.025 [2024-11-26 20:52:43.782579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:49.025 [2024-11-26 20:52:43.782593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:49.025 [2024-11-26 20:52:43.782631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:49.025 [2024-11-26 20:52:43.782658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:49.025 [2024-11-26 20:52:43.782680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.025 [2024-11-26 20:52:43.782702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:49.025 [2024-11-26 20:52:43.782715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.025 [2024-11-26 20:52:43.782737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:49.025 [2024-11-26 20:52:43.782746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.025 [2024-11-26 20:52:43.782781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:49.025 [2024-11-26 20:52:43.782793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.025 [2024-11-26 20:52:43.782820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:49.025 [2024-11-26 20:52:43.782830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.025 [2024-11-26 20:52:43.782851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:49.025 [2024-11-26 20:52:43.782863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:49.025 [2024-11-26 20:52:43.782873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.025 [2024-11-26 20:52:43.782885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:49.025 [2024-11-26 20:52:43.782894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:49.025 [2024-11-26 20:52:43.782907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:49.025 [2024-11-26 20:52:43.782929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:49.025 [2024-11-26 20:52:43.782938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.025 [2024-11-26 20:52:43.782949] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:49.025 [2024-11-26 20:52:43.782960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:49.025 [2024-11-26 20:52:43.782974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.025 [2024-11-26 20:52:43.782984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.025 [2024-11-26 20:52:43.783001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:49.025 [2024-11-26 20:52:43.783011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:49.025 [2024-11-26 20:52:43.783024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:49.025 [2024-11-26 20:52:43.783034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:49.025 [2024-11-26 20:52:43.783046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:49.025 [2024-11-26 20:52:43.783055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:49.025 [2024-11-26 20:52:43.783072] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:49.025 [2024-11-26 20:52:43.783085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.025 [2024-11-26 20:52:43.783100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:49.025 [2024-11-26 20:52:43.783110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:49.025 [2024-11-26 20:52:43.783123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:49.025 [2024-11-26 20:52:43.783134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:49.025 [2024-11-26 20:52:43.783147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:49.025 [2024-11-26 20:52:43.783158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:49.025 [2024-11-26 20:52:43.783171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:49.025 [2024-11-26 20:52:43.783181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:49.025 [2024-11-26 20:52:43.783197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:49.025 [2024-11-26 20:52:43.783208] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:49.025 [2024-11-26 20:52:43.783221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:49.025 [2024-11-26 20:52:43.783231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:49.025 [2024-11-26 20:52:43.783245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:49.025 [2024-11-26 20:52:43.783255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:49.025 [2024-11-26 20:52:43.783270] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:49.025 [2024-11-26 20:52:43.783282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.026 [2024-11-26 20:52:43.783299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:49.026 [2024-11-26 20:52:43.783310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:49.026 [2024-11-26 20:52:43.783324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:49.026 [2024-11-26 20:52:43.783335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:49.026 [2024-11-26 20:52:43.783349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.026 [2024-11-26 20:52:43.783359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:49.026 [2024-11-26 20:52:43.783373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:24:49.026 [2024-11-26 20:52:43.783383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.026 [2024-11-26 20:52:43.783424] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
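The two superblock dumps above list every metadata region as "Region type:... ver:... blk_offs:... blk_sz:...", and a healthy layout tiles each address space with no gaps or overlaps: each region starts exactly where the previous one ends (0x0 + 0x20 = 0x20, 0x20 + 0x5000 = 0x5020, and so on, with type 0xfffffffe marking free space). Below is a minimal Python sketch of that invariant check; the regex, the console.log path, and the idea of feeding it one section at a time are illustrative assumptions, not an SPDK-provided tool.

import re

REGION = re.compile(r"Region type:(\S+) ver:(\d+) blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)")

def check_layout(lines):
    # Collect (offset, size, type) per region notice. Run separately for the
    # "nvc" and "base dev" sections -- they are independent address spaces.
    regions = sorted(
        (int(m.group(3), 16), int(m.group(4), 16), m.group(1))
        for m in map(REGION.search, lines) if m
    )
    for (off, size, rtype), (nxt, _, _) in zip(regions, regions[1:]):
        if off + size != nxt:
            print(f"gap/overlap after type {rtype}: 0x{off + size:x} != 0x{nxt:x}")
    if regions:
        print(f"{len(regions)} regions, ending at block 0x{regions[-1][0] + regions[-1][1]:x}")

with open("console.log") as f:  # hypothetical saved copy of this console output
    check_layout(f.read().splitlines())

Fed the nvc section above, this should report 15 contiguous regions ending at block 0x143300 (0x7220 + 0x13c0e0).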
00:24:49.026 [2024-11-26 20:52:43.783437] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:51.550 [2024-11-26 20:52:46.179244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.179311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:51.550 [2024-11-26 20:52:46.179331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2395.806 ms 00:24:51.550 [2024-11-26 20:52:46.179342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.218269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.218323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:51.550 [2024-11-26 20:52:46.218358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.608 ms 00:24:51.550 [2024-11-26 20:52:46.218369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.218527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.218541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:51.550 [2024-11-26 20:52:46.218558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:51.550 [2024-11-26 20:52:46.218569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.277677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.277726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:51.550 [2024-11-26 20:52:46.277744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.063 ms 00:24:51.550 [2024-11-26 20:52:46.277755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.277808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.277819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:51.550 [2024-11-26 20:52:46.277832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:51.550 [2024-11-26 20:52:46.277846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.278355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.278376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.550 [2024-11-26 20:52:46.278390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:24:51.550 [2024-11-26 20:52:46.278401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.278511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.278524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.550 [2024-11-26 20:52:46.278541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:24:51.550 [2024-11-26 20:52:46.278551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.297460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.297660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.550 [2024-11-26 
20:52:46.297807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.884 ms 00:24:51.550 [2024-11-26 20:52:46.297838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.310360] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:24:51.550 [2024-11-26 20:52:46.316240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.316277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:51.550 [2024-11-26 20:52:46.316293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.300 ms 00:24:51.550 [2024-11-26 20:52:46.316306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.393199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.393267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:51.550 [2024-11-26 20:52:46.393300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.855 ms 00:24:51.550 [2024-11-26 20:52:46.393314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.393500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.393519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:51.550 [2024-11-26 20:52:46.393531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:24:51.550 [2024-11-26 20:52:46.393547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.431360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.431407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:51.550 [2024-11-26 20:52:46.431422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.762 ms 00:24:51.550 [2024-11-26 20:52:46.431435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.467397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.467436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:51.550 [2024-11-26 20:52:46.467467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.921 ms 00:24:51.550 [2024-11-26 20:52:46.467480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.550 [2024-11-26 20:52:46.468263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.550 [2024-11-26 20:52:46.468303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:51.550 [2024-11-26 20:52:46.468316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:24:51.550 [2024-11-26 20:52:46.468329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.809 [2024-11-26 20:52:46.565682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.809 [2024-11-26 20:52:46.565749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:51.809 [2024-11-26 20:52:46.565765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.295 ms 00:24:51.809 [2024-11-26 20:52:46.565779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.809 [2024-11-26 
20:52:46.602662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.809 [2024-11-26 20:52:46.602711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:51.809 [2024-11-26 20:52:46.602729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.798 ms 00:24:51.809 [2024-11-26 20:52:46.602758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.809 [2024-11-26 20:52:46.639375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.809 [2024-11-26 20:52:46.639422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:51.809 [2024-11-26 20:52:46.639436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.576 ms 00:24:51.809 [2024-11-26 20:52:46.639449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.809 [2024-11-26 20:52:46.676477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.809 [2024-11-26 20:52:46.676523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:51.809 [2024-11-26 20:52:46.676538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.985 ms 00:24:51.809 [2024-11-26 20:52:46.676552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.809 [2024-11-26 20:52:46.676596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.809 [2024-11-26 20:52:46.676625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:51.809 [2024-11-26 20:52:46.676637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:51.809 [2024-11-26 20:52:46.676650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.809 [2024-11-26 20:52:46.676753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.809 [2024-11-26 20:52:46.676768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:51.809 [2024-11-26 20:52:46.676779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:51.809 [2024-11-26 20:52:46.676792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.809 [2024-11-26 20:52:46.677846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2911.995 ms, result 0 00:24:51.809 { 00:24:51.809 "name": "ftl0", 00:24:51.809 "uuid": "b2e7b0e5-032d-4bd5-8198-7ab0ca43e96f" 00:24:51.809 } 00:24:51.809 20:52:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:24:51.809 20:52:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:24:51.809 20:52:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:24:52.068 20:52:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:24:52.326 [2024-11-26 20:52:47.102304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:52.326 I/O size of 69632 is greater than zero copy threshold (65536). 00:24:52.326 Zero copy mechanism will not be used. 00:24:52.326 Running I/O for 4 seconds... 
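The startup sequence above finishes with "Management process finished, name 'FTL startup', duration = 2911.995 ms, result 0", and the per-step trace_step notices account for most of that time; the NV cache scrub alone took 2395.806 ms. A small sketch that folds the name/duration pairs into a profile of the slowest steps follows; it assumes the console log is available one line per entry, and the parsing is illustrative rather than an SPDK tool.

import re

def step_profile(lines):
    steps, name = [], None
    for line in lines:
        m = re.search(r"trace_step: .* name: (.+)$", line)
        if m:
            name = m.group(1).strip()
            continue
        m = re.search(r"trace_step: .* duration: ([0-9.]+) ms", line)
        if m and name is not None:
            steps.append((float(m.group(1)), name))
            name = None
    # Print the five slowest management steps; "Scrub NV cache" should dominate.
    for dur, step in sorted(steps, reverse=True)[:5]:
        print(f"{dur:10.3f} ms  {step}")

Summing the step durations lands a little under the reported total, since the time between actions is not traced.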
00:24:54.195 1901.00 IOPS, 126.24 MiB/s [2024-11-26T20:52:50.124Z] 1961.00 IOPS, 130.22 MiB/s [2024-11-26T20:52:51.114Z] 1983.67 IOPS, 131.73 MiB/s [2024-11-26T20:52:51.372Z] 1985.50 IOPS, 131.85 MiB/s
00:24:56.378 Latency(us)
00:24:56.378 [2024-11-26T20:52:51.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:56.378 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:24:56.378 ftl0 : 4.00 1984.99 131.82 0.00 0.00 527.70 197.00 10298.51
00:24:56.378 [2024-11-26T20:52:51.372Z] ===================================================================================================================
00:24:56.378 [2024-11-26T20:52:51.372Z] Total : 1984.99 131.82 0.00 0.00 527.70 197.00 10298.51
00:24:56.378 [2024-11-26 20:52:51.114050] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:24:56.378 {
00:24:56.378 "results": [
00:24:56.378 {
00:24:56.378 "job": "ftl0",
00:24:56.378 "core_mask": "0x1",
00:24:56.378 "workload": "randwrite",
00:24:56.378 "status": "finished",
00:24:56.378 "queue_depth": 1,
00:24:56.378 "io_size": 69632,
00:24:56.378 "runtime": 4.001539,
00:24:56.378 "iops": 1984.986276530105,
00:24:56.378 "mibps": 131.8154949258273,
00:24:56.378 "io_failed": 0,
00:24:56.378 "io_timeout": 0,
00:24:56.378 "avg_latency_us": 527.6958941985456,
00:24:56.378 "min_latency_us": 196.99809523809523,
00:24:56.378 "max_latency_us": 10298.514285714286
00:24:56.378 }
00:24:56.378 ],
00:24:56.378 "core_count": 1
00:24:56.378 }
00:24:56.378 20:52:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:24:56.378 [2024-11-26 20:52:51.259424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:24:56.378 Running I/O for 4 seconds...
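The summary above is plain JSON with the Jenkins console timestamp glued onto every line, so it can be recovered mechanically and sanity-checked: mibps should equal iops * io_size / 2^20, and 1984.986 * 69632 / 1048576 ≈ 131.815 does match the reported value (the 69632-byte, i.e. 68 KiB, I/O size is also what tripped the 65536-byte zero-copy threshold noticed earlier). A hedged sketch follows; bdevperf.out is a hypothetical saved copy of the output above, and the brace-counting extractor is illustrative.

import json
import re

def extract_results(raw: str) -> dict:
    # Drop the "HH:MM:SS.mmm " console prefix, then slice out the first
    # balanced {...} block and parse it as JSON.
    lines = [re.sub(r"^\d{2}:\d{2}:\d{2}\.\d{3} ", "", ln) for ln in raw.splitlines()]
    depth, buf = 0, []
    for ln in lines:
        s = ln.strip()
        if depth == 0 and s != "{":
            continue
        buf.append(s)
        depth += s.count("{") - s.count("}")
        if depth == 0:
            return json.loads(" ".join(buf))
    raise ValueError("no balanced JSON block found")

raw = open("bdevperf.out").read()  # hypothetical capture of the lines above
job = extract_results(raw)["results"][0]
assert abs(job["iops"] * job["io_size"] / 2**20 - job["mibps"]) < 1e-6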
00:24:58.688 10130.00 IOPS, 39.57 MiB/s [2024-11-26T20:52:54.615Z] 10103.00 IOPS, 39.46 MiB/s [2024-11-26T20:52:55.550Z] 9186.67 IOPS, 35.89 MiB/s [2024-11-26T20:52:55.550Z] 8984.50 IOPS, 35.10 MiB/s
00:25:00.556 Latency(us)
00:25:00.556 [2024-11-26T20:52:55.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:00.556 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:25:00.556 ftl0 : 4.02 8979.06 35.07 0.00 0.00 14224.54 273.07 35701.52
00:25:00.556 [2024-11-26T20:52:55.550Z] ===================================================================================================================
00:25:00.556 [2024-11-26T20:52:55.550Z] Total : 8979.06 35.07 0.00 0.00 14224.54 0.00 35701.52
00:25:00.556 {
00:25:00.556 "results": [
00:25:00.556 {
00:25:00.556 "job": "ftl0",
00:25:00.556 "core_mask": "0x1",
00:25:00.556 "workload": "randwrite",
00:25:00.556 "status": "finished",
00:25:00.556 "queue_depth": 128,
00:25:00.556 "io_size": 4096,
00:25:00.556 "runtime": 4.016456,
00:25:00.556 "iops": 8979.060146557063,
00:25:00.556 "mibps": 35.07445369748853,
00:25:00.556 "io_failed": 0,
00:25:00.556 "io_timeout": 0,
00:25:00.556 "avg_latency_us": 14224.541865044155,
00:25:00.556 "min_latency_us": 273.06666666666666,
00:25:00.556 "max_latency_us": 35701.51619047619
00:25:00.556 }
00:25:00.556 ],
00:25:00.556 "core_count": 1
00:25:00.556 }
00:25:00.556 [2024-11-26 20:52:55.286336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:25:00.556 20:52:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:25:00.556 [2024-11-26 20:52:55.439053] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:25:00.556 Running I/O for 4 seconds...
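At queue depth 128 with 4 KiB writes, ~8979 IOPS at ~14.2 ms average latency is self-consistent under Little's law: with the queue kept full, average latency ≈ queue_depth / IOPS. A quick check with the values from the JSON summary above:

# Little's law cross-check for the qd=128 randwrite run; values are copied
# from the summary above, and the small gap vs. avg_latency_us comes from
# ramp-up plus accounting overhead.
queue_depth = 128
iops = 8979.060146557063
print(queue_depth / iops * 1e6)  # ~14255 us, close to the reported 14224.54 us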
00:25:02.865 7778.00 IOPS, 30.38 MiB/s [2024-11-26T20:52:58.790Z] 7888.00 IOPS, 30.81 MiB/s [2024-11-26T20:52:59.722Z] 7970.67 IOPS, 31.14 MiB/s [2024-11-26T20:52:59.722Z] 8010.25 IOPS, 31.29 MiB/s 00:25:04.728 Latency(us) 00:25:04.729 [2024-11-26T20:52:59.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.729 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:04.729 Verification LBA range: start 0x0 length 0x1400000 00:25:04.729 ftl0 : 4.01 8021.99 31.34 0.00 0.00 15905.97 284.77 19473.55 00:25:04.729 [2024-11-26T20:52:59.723Z] =================================================================================================================== 00:25:04.729 [2024-11-26T20:52:59.723Z] Total : 8021.99 31.34 0.00 0.00 15905.97 0.00 19473.55 00:25:04.729 [2024-11-26 20:52:59.469823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:04.729 { 00:25:04.729 "results": [ 00:25:04.729 { 00:25:04.729 "job": "ftl0", 00:25:04.729 "core_mask": "0x1", 00:25:04.729 "workload": "verify", 00:25:04.729 "status": "finished", 00:25:04.729 "verify_range": { 00:25:04.729 "start": 0, 00:25:04.729 "length": 20971520 00:25:04.729 }, 00:25:04.729 "queue_depth": 128, 00:25:04.729 "io_size": 4096, 00:25:04.729 "runtime": 4.010102, 00:25:04.729 "iops": 8021.990463085478, 00:25:04.729 "mibps": 31.33590024642765, 00:25:04.729 "io_failed": 0, 00:25:04.729 "io_timeout": 0, 00:25:04.729 "avg_latency_us": 15905.9736585207, 00:25:04.729 "min_latency_us": 284.76952380952383, 00:25:04.729 "max_latency_us": 19473.554285714286 00:25:04.729 } 00:25:04.729 ], 00:25:04.729 "core_count": 1 00:25:04.729 } 00:25:04.729 20:52:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:25:04.986 [2024-11-26 20:52:59.769734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.986 [2024-11-26 20:52:59.769969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:04.986 [2024-11-26 20:52:59.770012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:04.987 [2024-11-26 20:52:59.770027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.987 [2024-11-26 20:52:59.770069] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:04.987 [2024-11-26 20:52:59.774641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.987 [2024-11-26 20:52:59.774672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:04.987 [2024-11-26 20:52:59.774688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.548 ms 00:25:04.987 [2024-11-26 20:52:59.774699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.987 [2024-11-26 20:52:59.776652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.987 [2024-11-26 20:52:59.776811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:04.987 [2024-11-26 20:52:59.776848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.923 ms 00:25:04.987 [2024-11-26 20:52:59.776861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.987 [2024-11-26 20:52:59.944424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.987 [2024-11-26 20:52:59.944502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:25:04.987 [2024-11-26 20:52:59.944529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 167.516 ms 00:25:04.987 [2024-11-26 20:52:59.944543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.987 [2024-11-26 20:52:59.949993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.987 [2024-11-26 20:52:59.950027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:04.987 [2024-11-26 20:52:59.950043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.402 ms 00:25:04.987 [2024-11-26 20:52:59.950057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.247 [2024-11-26 20:52:59.988291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.247 [2024-11-26 20:52:59.988333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:05.247 [2024-11-26 20:52:59.988351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.161 ms 00:25:05.247 [2024-11-26 20:52:59.988362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.247 [2024-11-26 20:53:00.012020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.247 [2024-11-26 20:53:00.012076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:05.247 [2024-11-26 20:53:00.012096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.611 ms 00:25:05.247 [2024-11-26 20:53:00.012108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.247 [2024-11-26 20:53:00.012297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.247 [2024-11-26 20:53:00.012313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:05.247 [2024-11-26 20:53:00.012331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:25:05.247 [2024-11-26 20:53:00.012343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.247 [2024-11-26 20:53:00.051401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.247 [2024-11-26 20:53:00.051449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:05.247 [2024-11-26 20:53:00.051468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.033 ms 00:25:05.247 [2024-11-26 20:53:00.051479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.247 [2024-11-26 20:53:00.089397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.247 [2024-11-26 20:53:00.089442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:05.247 [2024-11-26 20:53:00.089460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.846 ms 00:25:05.247 [2024-11-26 20:53:00.089471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.247 [2024-11-26 20:53:00.125400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.247 [2024-11-26 20:53:00.125437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:05.247 [2024-11-26 20:53:00.125453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.881 ms 00:25:05.247 [2024-11-26 20:53:00.125478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.247 [2024-11-26 20:53:00.162878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.247 [2024-11-26 20:53:00.162919] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:05.247 [2024-11-26 20:53:00.162940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.280 ms 00:25:05.247 [2024-11-26 20:53:00.162950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.247 [2024-11-26 20:53:00.162993] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:05.247 [2024-11-26 20:53:00.163010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:25:05.247 [2024-11-26 20:53:00.163293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.163990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:05.247 [2024-11-26 20:53:00.164237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:05.248 [2024-11-26 20:53:00.164252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:05.248 [2024-11-26 20:53:00.164263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:05.248 [2024-11-26 20:53:00.164280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:05.248 [2024-11-26 20:53:00.164291] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:05.248 [2024-11-26 20:53:00.164306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:05.248 [2024-11-26 20:53:00.164317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:05.248 [2024-11-26 20:53:00.164331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:05.248 [2024-11-26 20:53:00.164350] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:05.248 [2024-11-26 20:53:00.164363] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b2e7b0e5-032d-4bd5-8198-7ab0ca43e96f 00:25:05.248 [2024-11-26 20:53:00.164378] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:05.248 [2024-11-26 20:53:00.164391] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:05.248 [2024-11-26 20:53:00.164401] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:05.248 [2024-11-26 20:53:00.164414] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:05.248 [2024-11-26 20:53:00.164427] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:05.248 [2024-11-26 20:53:00.164440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:05.248 [2024-11-26 20:53:00.164450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:05.248 [2024-11-26 20:53:00.164465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:05.248 [2024-11-26 20:53:00.164474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:05.248 [2024-11-26 20:53:00.164487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.248 [2024-11-26 20:53:00.164498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:05.248 [2024-11-26 20:53:00.164511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:25:05.248 [2024-11-26 20:53:00.164522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.248 [2024-11-26 20:53:00.185095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.248 [2024-11-26 20:53:00.185269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:05.248 [2024-11-26 20:53:00.185297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.514 ms 00:25:05.248 [2024-11-26 20:53:00.185308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.248 [2024-11-26 20:53:00.185889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.248 [2024-11-26 20:53:00.185906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:05.248 [2024-11-26 20:53:00.185921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:25:05.248 [2024-11-26 20:53:00.185931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.243311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.243358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:05.507 [2024-11-26 20:53:00.243379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:05.507 [2024-11-26 20:53:00.243391] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.243460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.243471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:05.507 [2024-11-26 20:53:00.243485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:05.507 [2024-11-26 20:53:00.243496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.243643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.243660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:05.507 [2024-11-26 20:53:00.243674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:05.507 [2024-11-26 20:53:00.243684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.243724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.243736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:05.507 [2024-11-26 20:53:00.243750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:05.507 [2024-11-26 20:53:00.243761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.374037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.374102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:05.507 [2024-11-26 20:53:00.374125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:05.507 [2024-11-26 20:53:00.374136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.480070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.480127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:05.507 [2024-11-26 20:53:00.480160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:05.507 [2024-11-26 20:53:00.480172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.480307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.480322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:05.507 [2024-11-26 20:53:00.480337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:05.507 [2024-11-26 20:53:00.480348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.480428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.480441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:05.507 [2024-11-26 20:53:00.480456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:05.507 [2024-11-26 20:53:00.480467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.480601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.480644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:05.507 [2024-11-26 20:53:00.480663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:25:05.507 [2024-11-26 20:53:00.480674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.480721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.480735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:05.507 [2024-11-26 20:53:00.480749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:05.507 [2024-11-26 20:53:00.480760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.480804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.480820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:05.507 [2024-11-26 20:53:00.480834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:05.507 [2024-11-26 20:53:00.480855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.480907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:05.507 [2024-11-26 20:53:00.480920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:05.507 [2024-11-26 20:53:00.480935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:05.507 [2024-11-26 20:53:00.480945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.507 [2024-11-26 20:53:00.481095] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 711.307 ms, result 0 00:25:05.507 true 00:25:05.769 20:53:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78488 00:25:05.769 20:53:00 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78488 ']' 00:25:05.769 20:53:00 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78488 00:25:05.769 20:53:00 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:05.769 20:53:00 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.769 20:53:00 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78488 00:25:05.769 killing process with pid 78488 00:25:05.769 Received shutdown signal, test time was about 4.000000 seconds 00:25:05.769 00:25:05.769 Latency(us) 00:25:05.769 [2024-11-26T20:53:00.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.769 [2024-11-26T20:53:00.763Z] =================================================================================================================== 00:25:05.769 [2024-11-26T20:53:00.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.769 20:53:00 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:05.769 20:53:00 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:05.769 20:53:00 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78488' 00:25:05.769 20:53:00 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78488 00:25:05.769 20:53:00 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78488 00:25:07.171 Remove shared memory files 00:25:07.171 20:53:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:07.171 20:53:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:25:07.171 20:53:01 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:07.171 20:53:01 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:25:07.171 20:53:01 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:25:07.171 20:53:01 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:25:07.171 20:53:01 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:07.171 20:53:01 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:25:07.171 ************************************ 00:25:07.171 END TEST ftl_bdevperf 00:25:07.171 ************************************ 00:25:07.171 00:25:07.171 real 0m22.605s 00:25:07.171 user 0m25.752s 00:25:07.171 sys 0m1.317s 00:25:07.171 20:53:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.171 20:53:01 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:07.171 20:53:01 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:07.171 20:53:01 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:07.171 20:53:01 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.171 20:53:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:07.171 ************************************ 00:25:07.171 START TEST ftl_trim 00:25:07.171 ************************************ 00:25:07.171 20:53:01 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:07.171 * Looking for test storage... 00:25:07.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:07.171 20:53:01 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:07.171 20:53:01 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:25:07.171 20:53:01 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:07.171 20:53:01 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:07.171 20:53:01 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.171 20:53:01 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.171 20:53:01 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.171 20:53:01 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.171 20:53:01 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.171 20:53:01 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.171 20:53:01 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.172 20:53:01 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.172 20:53:01 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.172 20:53:01 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.172 20:53:01 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.172 20:53:01 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:25:07.172 20:53:01 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:25:07.172 20:53:01 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.172 20:53:01 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:07.172 20:53:01 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:25:07.172 20:53:01 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:25:07.172 20:53:02 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.172 20:53:02 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:25:07.172 20:53:02 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.172 20:53:02 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:25:07.172 20:53:02 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:25:07.172 20:53:02 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.172 20:53:02 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:25:07.172 20:53:02 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.172 20:53:02 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.172 20:53:02 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.172 20:53:02 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:25:07.172 20:53:02 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.172 20:53:02 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.172 --rc genhtml_branch_coverage=1 00:25:07.172 --rc genhtml_function_coverage=1 00:25:07.172 --rc genhtml_legend=1 00:25:07.172 --rc geninfo_all_blocks=1 00:25:07.172 --rc geninfo_unexecuted_blocks=1 00:25:07.172 00:25:07.172 ' 00:25:07.172 20:53:02 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.172 --rc genhtml_branch_coverage=1 00:25:07.172 --rc genhtml_function_coverage=1 00:25:07.172 --rc genhtml_legend=1 00:25:07.172 --rc geninfo_all_blocks=1 00:25:07.172 --rc geninfo_unexecuted_blocks=1 00:25:07.172 00:25:07.172 ' 00:25:07.172 20:53:02 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.172 --rc genhtml_branch_coverage=1 00:25:07.172 --rc genhtml_function_coverage=1 00:25:07.172 --rc genhtml_legend=1 00:25:07.172 --rc geninfo_all_blocks=1 00:25:07.172 --rc geninfo_unexecuted_blocks=1 00:25:07.172 00:25:07.172 ' 00:25:07.172 20:53:02 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.172 --rc genhtml_branch_coverage=1 00:25:07.172 --rc genhtml_function_coverage=1 00:25:07.172 --rc genhtml_legend=1 00:25:07.172 --rc geninfo_all_blocks=1 00:25:07.172 --rc geninfo_unexecuted_blocks=1 00:25:07.172 00:25:07.172 ' 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
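The trace above steps through the lcov version gate in scripts/common.sh: lt/cmp_versions splits each version string on '.', '-' and ':' (the IFS=.-: reads) and compares the numeric components in order, and since 1.15 < 2 the lcov-1.x LCOV_OPTS branch is exported. A Python restatement of that comparison, as a sketch of the logic rather than the SPDK shell code itself (it assumes purely numeric components and pads the shorter version with zeros):

import re

def version_lt(a: str, b: str) -> bool:
    # Mirror the shell's IFS=.-: componentwise compare; numeric fields only.
    parts = lambda v: [int(x) for x in re.split(r"[.:-]", v)]
    pa, pb = parts(a), parts(b)
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))
    pb += [0] * (width - len(pb))
    return pa < pb

print(version_lt("1.15", "2"))  # True -> the lcov 1.x option set is used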
00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:25:07.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
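trim.sh fixes its parameters above (base device 0000:00:11.0, cache device 0000:00:10.0, a 240 s timeout, 65536 data blocks and 1024 unmap blocks) and next launches spdk_tgt with -m 0x7. The mask is a bitmap of CPU cores, so 0x7 = 0b111 selects cores 0-2, which is why three reactors report in below. A tiny illustrative helper for decoding such masks (not part of the SPDK tree):

def mask_to_cores(mask: str) -> list[int]:
    # "0x7" -> 0b111 -> cores [0, 1, 2]
    m = int(mask, 16)
    return [bit for bit in range(m.bit_length()) if m >> bit & 1]

print(mask_to_cores("0x7"))  # [0, 1, 2] -- matches the three reactor notices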
00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78836 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78836 00:25:07.172 20:53:02 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78836 ']' 00:25:07.172 20:53:02 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:25:07.173 20:53:02 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.173 20:53:02 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.173 20:53:02 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.173 20:53:02 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.173 20:53:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:07.431 [2024-11-26 20:53:02.185963] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:25:07.432 [2024-11-26 20:53:02.186353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78836 ] 00:25:07.432 [2024-11-26 20:53:02.376589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:07.690 [2024-11-26 20:53:02.500035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.690 [2024-11-26 20:53:02.500127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.690 [2024-11-26 20:53:02.500159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.627 20:53:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.627 20:53:03 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:08.627 20:53:03 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:08.627 20:53:03 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:25:08.627 20:53:03 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:08.627 20:53:03 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:25:08.627 20:53:03 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:25:08.627 20:53:03 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:08.886 20:53:03 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:08.886 20:53:03 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:25:08.886 20:53:03 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:08.886 20:53:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:08.886 20:53:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:08.886 20:53:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:08.886 20:53:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:08.886 20:53:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:09.145 20:53:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:09.145 { 00:25:09.145 "name": "nvme0n1", 00:25:09.145 "aliases": [ 00:25:09.145 "ec56522a-e330-46f2-9633-9344e7f75fc3" 00:25:09.145 ], 00:25:09.145 "product_name": "NVMe disk", 00:25:09.145 "block_size": 4096, 00:25:09.145 "num_blocks": 1310720, 00:25:09.145 "uuid": "ec56522a-e330-46f2-9633-9344e7f75fc3", 00:25:09.145 "numa_id": -1, 00:25:09.145 "assigned_rate_limits": { 00:25:09.145 "rw_ios_per_sec": 0, 00:25:09.145 "rw_mbytes_per_sec": 0, 00:25:09.145 "r_mbytes_per_sec": 0, 00:25:09.145 "w_mbytes_per_sec": 0 00:25:09.145 }, 00:25:09.145 "claimed": true, 00:25:09.145 "claim_type": "read_many_write_one", 00:25:09.145 "zoned": false, 00:25:09.145 "supported_io_types": { 00:25:09.145 "read": true, 00:25:09.145 "write": true, 00:25:09.145 "unmap": true, 00:25:09.145 "flush": true, 00:25:09.145 "reset": true, 00:25:09.145 "nvme_admin": true, 00:25:09.145 "nvme_io": true, 00:25:09.145 "nvme_io_md": false, 00:25:09.145 "write_zeroes": true, 00:25:09.145 "zcopy": false, 00:25:09.145 "get_zone_info": false, 00:25:09.146 "zone_management": false, 00:25:09.146 "zone_append": false, 00:25:09.146 "compare": true, 00:25:09.146 "compare_and_write": false, 00:25:09.146 "abort": true, 00:25:09.146 "seek_hole": false, 00:25:09.146 "seek_data": false, 00:25:09.146 "copy": true, 00:25:09.146 "nvme_iov_md": false 00:25:09.146 }, 00:25:09.146 "driver_specific": { 00:25:09.146 "nvme": [ 00:25:09.146 { 00:25:09.146 "pci_address": "0000:00:11.0", 00:25:09.146 "trid": { 00:25:09.146 "trtype": "PCIe", 00:25:09.146 "traddr": "0000:00:11.0" 00:25:09.146 }, 00:25:09.146 "ctrlr_data": { 00:25:09.146 "cntlid": 0, 00:25:09.146 "vendor_id": "0x1b36", 00:25:09.146 "model_number": "QEMU NVMe Ctrl", 00:25:09.146 "serial_number": "12341", 00:25:09.146 "firmware_revision": "8.0.0", 00:25:09.146 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:09.146 "oacs": { 00:25:09.146 "security": 0, 00:25:09.146 "format": 1, 00:25:09.146 "firmware": 0, 00:25:09.146 "ns_manage": 1 00:25:09.146 }, 00:25:09.146 "multi_ctrlr": false, 00:25:09.146 "ana_reporting": false 00:25:09.146 }, 00:25:09.146 "vs": { 00:25:09.146 "nvme_version": "1.4" 00:25:09.146 }, 00:25:09.146 "ns_data": { 00:25:09.146 "id": 1, 00:25:09.146 "can_share": false 00:25:09.146 } 00:25:09.146 } 00:25:09.146 ], 00:25:09.146 "mp_policy": "active_passive" 00:25:09.146 } 00:25:09.146 } 00:25:09.146 ]' 00:25:09.146 20:53:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:09.146 20:53:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:09.146 20:53:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:09.146 20:53:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:09.146 20:53:04 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:09.146 20:53:04 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:25:09.146 20:53:04 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:25:09.146 20:53:04 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:09.146 20:53:04 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:25:09.146 20:53:04 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:09.146 20:53:04 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:09.404 20:53:04 ftl.ftl_trim -- ftl/common.sh@28 -- # 
stores=6c01375a-c7cb-48f7-bbb6-5061bced217a 00:25:09.404 20:53:04 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:25:09.404 20:53:04 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c01375a-c7cb-48f7-bbb6-5061bced217a 00:25:09.663 20:53:04 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:09.932 20:53:04 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=b06cebfa-4642-4455-80e8-277a4d55ad2f 00:25:09.932 20:53:04 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b06cebfa-4642-4455-80e8-277a4d55ad2f 00:25:10.190 20:53:05 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:10.190 20:53:05 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:10.190 20:53:05 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:25:10.191 20:53:05 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:10.191 20:53:05 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:10.191 20:53:05 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:25:10.191 20:53:05 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:10.191 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:10.191 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:10.191 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:10.191 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:10.191 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:10.449 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:10.449 { 00:25:10.449 "name": "7a8e31a0-8f89-4a64-b95d-cf41ead24703", 00:25:10.449 "aliases": [ 00:25:10.449 "lvs/nvme0n1p0" 00:25:10.449 ], 00:25:10.449 "product_name": "Logical Volume", 00:25:10.449 "block_size": 4096, 00:25:10.449 "num_blocks": 26476544, 00:25:10.449 "uuid": "7a8e31a0-8f89-4a64-b95d-cf41ead24703", 00:25:10.449 "assigned_rate_limits": { 00:25:10.449 "rw_ios_per_sec": 0, 00:25:10.449 "rw_mbytes_per_sec": 0, 00:25:10.449 "r_mbytes_per_sec": 0, 00:25:10.449 "w_mbytes_per_sec": 0 00:25:10.449 }, 00:25:10.449 "claimed": false, 00:25:10.449 "zoned": false, 00:25:10.449 "supported_io_types": { 00:25:10.449 "read": true, 00:25:10.449 "write": true, 00:25:10.449 "unmap": true, 00:25:10.449 "flush": false, 00:25:10.449 "reset": true, 00:25:10.449 "nvme_admin": false, 00:25:10.449 "nvme_io": false, 00:25:10.449 "nvme_io_md": false, 00:25:10.449 "write_zeroes": true, 00:25:10.449 "zcopy": false, 00:25:10.449 "get_zone_info": false, 00:25:10.449 "zone_management": false, 00:25:10.449 "zone_append": false, 00:25:10.449 "compare": false, 00:25:10.449 "compare_and_write": false, 00:25:10.449 "abort": false, 00:25:10.449 "seek_hole": true, 00:25:10.449 "seek_data": true, 00:25:10.449 "copy": false, 00:25:10.449 "nvme_iov_md": false 00:25:10.449 }, 00:25:10.449 "driver_specific": { 00:25:10.449 "lvol": { 00:25:10.449 "lvol_store_uuid": "b06cebfa-4642-4455-80e8-277a4d55ad2f", 00:25:10.449 "base_bdev": "nvme0n1", 00:25:10.449 "thin_provision": true, 
00:25:10.449 "num_allocated_clusters": 0, 00:25:10.449 "snapshot": false, 00:25:10.449 "clone": false, 00:25:10.449 "esnap_clone": false 00:25:10.449 } 00:25:10.449 } 00:25:10.449 } 00:25:10.449 ]' 00:25:10.449 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:10.449 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:10.449 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:10.449 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:10.449 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:10.449 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:10.449 20:53:05 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:25:10.449 20:53:05 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:25:10.449 20:53:05 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:10.707 20:53:05 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:10.707 20:53:05 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:10.707 20:53:05 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:10.707 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:10.707 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:10.707 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:10.707 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:10.707 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:10.965 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:10.965 { 00:25:10.965 "name": "7a8e31a0-8f89-4a64-b95d-cf41ead24703", 00:25:10.965 "aliases": [ 00:25:10.965 "lvs/nvme0n1p0" 00:25:10.965 ], 00:25:10.965 "product_name": "Logical Volume", 00:25:10.965 "block_size": 4096, 00:25:10.965 "num_blocks": 26476544, 00:25:10.965 "uuid": "7a8e31a0-8f89-4a64-b95d-cf41ead24703", 00:25:10.965 "assigned_rate_limits": { 00:25:10.965 "rw_ios_per_sec": 0, 00:25:10.965 "rw_mbytes_per_sec": 0, 00:25:10.965 "r_mbytes_per_sec": 0, 00:25:10.965 "w_mbytes_per_sec": 0 00:25:10.965 }, 00:25:10.965 "claimed": false, 00:25:10.965 "zoned": false, 00:25:10.965 "supported_io_types": { 00:25:10.965 "read": true, 00:25:10.965 "write": true, 00:25:10.965 "unmap": true, 00:25:10.965 "flush": false, 00:25:10.965 "reset": true, 00:25:10.965 "nvme_admin": false, 00:25:10.965 "nvme_io": false, 00:25:10.965 "nvme_io_md": false, 00:25:10.965 "write_zeroes": true, 00:25:10.965 "zcopy": false, 00:25:10.965 "get_zone_info": false, 00:25:10.965 "zone_management": false, 00:25:10.965 "zone_append": false, 00:25:10.965 "compare": false, 00:25:10.965 "compare_and_write": false, 00:25:10.965 "abort": false, 00:25:10.965 "seek_hole": true, 00:25:10.965 "seek_data": true, 00:25:10.965 "copy": false, 00:25:10.965 "nvme_iov_md": false 00:25:10.966 }, 00:25:10.966 "driver_specific": { 00:25:10.966 "lvol": { 00:25:10.966 "lvol_store_uuid": "b06cebfa-4642-4455-80e8-277a4d55ad2f", 00:25:10.966 "base_bdev": "nvme0n1", 00:25:10.966 "thin_provision": true, 00:25:10.966 "num_allocated_clusters": 0, 00:25:10.966 "snapshot": false, 00:25:10.966 "clone": false, 00:25:10.966 
"esnap_clone": false 00:25:10.966 } 00:25:10.966 } 00:25:10.966 } 00:25:10.966 ]' 00:25:10.966 20:53:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:11.223 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:11.223 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:11.223 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:11.223 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:11.223 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:11.223 20:53:06 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:25:11.223 20:53:06 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:11.483 20:53:06 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:25:11.483 20:53:06 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:25:11.483 20:53:06 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:11.483 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:11.483 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:11.483 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:11.483 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:11.483 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a8e31a0-8f89-4a64-b95d-cf41ead24703 00:25:11.741 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:11.741 { 00:25:11.741 "name": "7a8e31a0-8f89-4a64-b95d-cf41ead24703", 00:25:11.741 "aliases": [ 00:25:11.741 "lvs/nvme0n1p0" 00:25:11.741 ], 00:25:11.741 "product_name": "Logical Volume", 00:25:11.741 "block_size": 4096, 00:25:11.741 "num_blocks": 26476544, 00:25:11.741 "uuid": "7a8e31a0-8f89-4a64-b95d-cf41ead24703", 00:25:11.741 "assigned_rate_limits": { 00:25:11.741 "rw_ios_per_sec": 0, 00:25:11.741 "rw_mbytes_per_sec": 0, 00:25:11.741 "r_mbytes_per_sec": 0, 00:25:11.741 "w_mbytes_per_sec": 0 00:25:11.741 }, 00:25:11.741 "claimed": false, 00:25:11.741 "zoned": false, 00:25:11.742 "supported_io_types": { 00:25:11.742 "read": true, 00:25:11.742 "write": true, 00:25:11.742 "unmap": true, 00:25:11.742 "flush": false, 00:25:11.742 "reset": true, 00:25:11.742 "nvme_admin": false, 00:25:11.742 "nvme_io": false, 00:25:11.742 "nvme_io_md": false, 00:25:11.742 "write_zeroes": true, 00:25:11.742 "zcopy": false, 00:25:11.742 "get_zone_info": false, 00:25:11.742 "zone_management": false, 00:25:11.742 "zone_append": false, 00:25:11.742 "compare": false, 00:25:11.742 "compare_and_write": false, 00:25:11.742 "abort": false, 00:25:11.742 "seek_hole": true, 00:25:11.742 "seek_data": true, 00:25:11.742 "copy": false, 00:25:11.742 "nvme_iov_md": false 00:25:11.742 }, 00:25:11.742 "driver_specific": { 00:25:11.742 "lvol": { 00:25:11.742 "lvol_store_uuid": "b06cebfa-4642-4455-80e8-277a4d55ad2f", 00:25:11.742 "base_bdev": "nvme0n1", 00:25:11.742 "thin_provision": true, 00:25:11.742 "num_allocated_clusters": 0, 00:25:11.742 "snapshot": false, 00:25:11.742 "clone": false, 00:25:11.742 "esnap_clone": false 00:25:11.742 } 00:25:11.742 } 00:25:11.742 } 00:25:11.742 ]' 00:25:11.742 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:11.742 20:53:06 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:25:11.742 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:11.742 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:11.742 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:11.742 20:53:06 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:11.742 20:53:06 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:25:11.742 20:53:06 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7a8e31a0-8f89-4a64-b95d-cf41ead24703 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:25:12.005 [2024-11-26 20:53:06.844045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.005 [2024-11-26 20:53:06.844098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:12.005 [2024-11-26 20:53:06.844119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:12.005 [2024-11-26 20:53:06.844130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.005 [2024-11-26 20:53:06.847506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.005 [2024-11-26 20:53:06.847714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:12.005 [2024-11-26 20:53:06.847743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:25:12.005 [2024-11-26 20:53:06.847756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.005 [2024-11-26 20:53:06.847943] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:12.005 [2024-11-26 20:53:06.849038] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:12.005 [2024-11-26 20:53:06.849078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.005 [2024-11-26 20:53:06.849092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:12.005 [2024-11-26 20:53:06.849107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:25:12.005 [2024-11-26 20:53:06.849118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.005 [2024-11-26 20:53:06.849239] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 532d7671-2fbd-470b-89fe-c9097b3f6a68 00:25:12.005 [2024-11-26 20:53:06.850739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.005 [2024-11-26 20:53:06.850898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:12.005 [2024-11-26 20:53:06.850920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:12.005 [2024-11-26 20:53:06.850934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.005 [2024-11-26 20:53:06.858476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.005 [2024-11-26 20:53:06.858638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:12.005 [2024-11-26 20:53:06.858660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.432 ms 00:25:12.005 [2024-11-26 20:53:06.858677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.005 [2024-11-26 20:53:06.858837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
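
The get_bdev_size helper traced three times above boils the bdev_get_bdevs JSON down to a size in MiB: block_size * num_blocks / 1048576. Condensed into one function (a paraphrase of the traced steps, not the verbatim autotest_common.sh body; $rpc_py is the path ftl/common.sh exported earlier):

# size of a bdev in MiB, from its block size and block count
bdev_size_mb() {
  local bs nb
  bs=$($rpc_py bdev_get_bdevs -b "$1" | jq '.[] .block_size')
  nb=$($rpc_py bdev_get_bdevs -b "$1" | jq '.[] .num_blocks')
  echo $(( bs * nb / 1024 / 1024 ))
}
# 4096 * 1310720  / 1048576 = 5120   MiB  (raw nvme0n1, echoed earlier)
# 4096 * 26476544 / 1048576 = 103424 MiB  (thin lvol, echoed just above)
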
00:25:12.005 [2024-11-26 20:53:06.858855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:12.005 [2024-11-26 20:53:06.858868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:25:12.005 [2024-11-26 20:53:06.858885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.005 [2024-11-26 20:53:06.858924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.005 [2024-11-26 20:53:06.858938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:12.005 [2024-11-26 20:53:06.858949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:12.005 [2024-11-26 20:53:06.858965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.005 [2024-11-26 20:53:06.859001] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:12.005 [2024-11-26 20:53:06.863776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.005 [2024-11-26 20:53:06.863806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:12.005 [2024-11-26 20:53:06.863822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.779 ms 00:25:12.005 [2024-11-26 20:53:06.863833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.005 [2024-11-26 20:53:06.863907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.005 [2024-11-26 20:53:06.863936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:12.005 [2024-11-26 20:53:06.863951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:12.005 [2024-11-26 20:53:06.863961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.005 [2024-11-26 20:53:06.863995] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:12.005 [2024-11-26 20:53:06.864126] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:12.005 [2024-11-26 20:53:06.864145] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:12.005 [2024-11-26 20:53:06.864159] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:12.005 [2024-11-26 20:53:06.864175] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:12.005 [2024-11-26 20:53:06.864187] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:12.005 [2024-11-26 20:53:06.864201] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:12.005 [2024-11-26 20:53:06.864211] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:12.005 [2024-11-26 20:53:06.864227] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:12.005 [2024-11-26 20:53:06.864238] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:12.005 [2024-11-26 20:53:06.864251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.005 [2024-11-26 20:53:06.864262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:12.005 [2024-11-26 20:53:06.864275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 
00:25:12.005 [2024-11-26 20:53:06.864285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.005 [2024-11-26 20:53:06.864376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.005 [2024-11-26 20:53:06.864387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:12.005 [2024-11-26 20:53:06.864401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:12.005 [2024-11-26 20:53:06.864411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.005 [2024-11-26 20:53:06.864533] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:12.005 [2024-11-26 20:53:06.864545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:12.005 [2024-11-26 20:53:06.864558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:12.005 [2024-11-26 20:53:06.864569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.005 [2024-11-26 20:53:06.864582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:12.005 [2024-11-26 20:53:06.864592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:12.005 [2024-11-26 20:53:06.864604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:12.005 [2024-11-26 20:53:06.864636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:12.005 [2024-11-26 20:53:06.864650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:12.005 [2024-11-26 20:53:06.864659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:12.005 [2024-11-26 20:53:06.864671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:12.005 [2024-11-26 20:53:06.864681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:12.005 [2024-11-26 20:53:06.864694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:12.005 [2024-11-26 20:53:06.864704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:12.005 [2024-11-26 20:53:06.864717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:12.005 [2024-11-26 20:53:06.864727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.005 [2024-11-26 20:53:06.864741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:12.005 [2024-11-26 20:53:06.864751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:12.005 [2024-11-26 20:53:06.864767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.005 [2024-11-26 20:53:06.864776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:12.005 [2024-11-26 20:53:06.864790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:12.005 [2024-11-26 20:53:06.864799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:12.005 [2024-11-26 20:53:06.864811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:12.005 [2024-11-26 20:53:06.864821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:12.005 [2024-11-26 20:53:06.864833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:12.005 [2024-11-26 20:53:06.864842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:12.005 [2024-11-26 20:53:06.864854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
107.12 MiB 00:25:12.005 [2024-11-26 20:53:06.864863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:12.005 [2024-11-26 20:53:06.864875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:12.005 [2024-11-26 20:53:06.864884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:12.005 [2024-11-26 20:53:06.864895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:12.005 [2024-11-26 20:53:06.864905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:12.005 [2024-11-26 20:53:06.864919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:12.005 [2024-11-26 20:53:06.864928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:12.005 [2024-11-26 20:53:06.864939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:12.005 [2024-11-26 20:53:06.864948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:12.005 [2024-11-26 20:53:06.864959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:12.005 [2024-11-26 20:53:06.864969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:12.005 [2024-11-26 20:53:06.864980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:12.005 [2024-11-26 20:53:06.864990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.005 [2024-11-26 20:53:06.865001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:12.005 [2024-11-26 20:53:06.865010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:12.005 [2024-11-26 20:53:06.865022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.005 [2024-11-26 20:53:06.865031] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:12.005 [2024-11-26 20:53:06.865043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:12.005 [2024-11-26 20:53:06.865053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:12.005 [2024-11-26 20:53:06.865066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:12.006 [2024-11-26 20:53:06.865077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:12.006 [2024-11-26 20:53:06.865093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:12.006 [2024-11-26 20:53:06.865102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:12.006 [2024-11-26 20:53:06.865114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:12.006 [2024-11-26 20:53:06.865124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:12.006 [2024-11-26 20:53:06.865135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:12.006 [2024-11-26 20:53:06.865149] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:12.006 [2024-11-26 20:53:06.865169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:12.006 [2024-11-26 20:53:06.865186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:12.006 [2024-11-26 20:53:06.865200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:12.006 [2024-11-26 20:53:06.865210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:12.006 [2024-11-26 20:53:06.865222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:12.006 [2024-11-26 20:53:06.865240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:12.006 [2024-11-26 20:53:06.865253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:12.006 [2024-11-26 20:53:06.865263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:12.006 [2024-11-26 20:53:06.865276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:12.006 [2024-11-26 20:53:06.865287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:12.006 [2024-11-26 20:53:06.865302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:12.006 [2024-11-26 20:53:06.865312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:12.006 [2024-11-26 20:53:06.865325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:12.006 [2024-11-26 20:53:06.865335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:12.006 [2024-11-26 20:53:06.865350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:12.006 [2024-11-26 20:53:06.865360] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:12.006 [2024-11-26 20:53:06.865375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:12.006 [2024-11-26 20:53:06.865386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:12.006 [2024-11-26 20:53:06.865399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:12.006 [2024-11-26 20:53:06.865410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:12.006 [2024-11-26 20:53:06.865423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:12.006 [2024-11-26 20:53:06.865434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.006 [2024-11-26 20:53:06.865448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:12.006 [2024-11-26 20:53:06.865458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:25:12.006 
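
The SB metadata rows above give each region's offset and size in hex and, by all appearances, in 4 KiB blocks, so they can be cross-checked against the MiB figures of the region dump; decoding the type:0x2 row as an example (the block-unit reading is our inference from the matching numbers, not something the log states):

printf '%d blocks = %d MiB\n' $(( 0x5a00 )) $(( 0x5a00 * 4096 / 1048576 ))
# -> 23040 blocks = 90 MiB, i.e. the "Region l2p ... blocks: 90.00 MiB" entry
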
[2024-11-26 20:53:06.865471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.006 [2024-11-26 20:53:06.865573] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:12.006 [2024-11-26 20:53:06.865598] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:15.288 [2024-11-26 20:53:09.588562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.588813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:15.288 [2024-11-26 20:53:09.588905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2722.973 ms 00:25:15.288 [2024-11-26 20:53:09.588948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.628953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.629200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:15.288 [2024-11-26 20:53:09.629312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.619 ms 00:25:15.288 [2024-11-26 20:53:09.629357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.629633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.629764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:15.288 [2024-11-26 20:53:09.629865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:15.288 [2024-11-26 20:53:09.629913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.688866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.689093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:15.288 [2024-11-26 20:53:09.689116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.840 ms 00:25:15.288 [2024-11-26 20:53:09.689132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.689244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.689261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:15.288 [2024-11-26 20:53:09.689272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:15.288 [2024-11-26 20:53:09.689285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.689762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.689779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:15.288 [2024-11-26 20:53:09.689791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:25:15.288 [2024-11-26 20:53:09.689803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.689920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.689934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:15.288 [2024-11-26 20:53:09.689961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:25:15.288 [2024-11-26 20:53:09.689977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.711317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.711365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:15.288 [2024-11-26 20:53:09.711380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.303 ms 00:25:15.288 [2024-11-26 20:53:09.711399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.724208] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:15.288 [2024-11-26 20:53:09.741150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.741209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:15.288 [2024-11-26 20:53:09.741229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.592 ms 00:25:15.288 [2024-11-26 20:53:09.741241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.838278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.838346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:15.288 [2024-11-26 20:53:09.838365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.885 ms 00:25:15.288 [2024-11-26 20:53:09.838376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.838603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.838633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:15.288 [2024-11-26 20:53:09.838651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:25:15.288 [2024-11-26 20:53:09.838662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.876534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.876578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:15.288 [2024-11-26 20:53:09.876597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.831 ms 00:25:15.288 [2024-11-26 20:53:09.876610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.913549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.913601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:15.288 [2024-11-26 20:53:09.913651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.844 ms 00:25:15.288 [2024-11-26 20:53:09.913662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:09.914420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:09.914450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:15.288 [2024-11-26 20:53:09.914561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:25:15.288 [2024-11-26 20:53:09.914572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:10.027449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:10.027513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe 
P2L region 00:25:15.288 [2024-11-26 20:53:10.027536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.831 ms 00:25:15.288 [2024-11-26 20:53:10.027547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:10.068474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:10.068533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:15.288 [2024-11-26 20:53:10.068553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.728 ms 00:25:15.288 [2024-11-26 20:53:10.068564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:10.109746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:10.109801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:15.288 [2024-11-26 20:53:10.109837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.065 ms 00:25:15.288 [2024-11-26 20:53:10.109848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:10.146980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:10.147166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:15.288 [2024-11-26 20:53:10.147193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.027 ms 00:25:15.288 [2024-11-26 20:53:10.147204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:10.147334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:10.147348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:15.288 [2024-11-26 20:53:10.147366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:15.288 [2024-11-26 20:53:10.147377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:10.147462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.288 [2024-11-26 20:53:10.147473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:15.288 [2024-11-26 20:53:10.147487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:15.288 [2024-11-26 20:53:10.147497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.288 [2024-11-26 20:53:10.148636] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:15.288 [2024-11-26 20:53:10.153135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3304.200 ms, result 0 00:25:15.289 [2024-11-26 20:53:10.154098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:15.289 { 00:25:15.289 "name": "ftl0", 00:25:15.289 "uuid": "532d7671-2fbd-470b-89fe-c9097b3f6a68" 00:25:15.289 } 00:25:15.289 20:53:10 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:25:15.289 20:53:10 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:15.289 20:53:10 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:15.289 20:53:10 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:25:15.289 20:53:10 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:15.289 20:53:10 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:15.289 20:53:10 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:15.547 20:53:10 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:15.806 [ 00:25:15.806 { 00:25:15.806 "name": "ftl0", 00:25:15.806 "aliases": [ 00:25:15.806 "532d7671-2fbd-470b-89fe-c9097b3f6a68" 00:25:15.806 ], 00:25:15.806 "product_name": "FTL disk", 00:25:15.806 "block_size": 4096, 00:25:15.806 "num_blocks": 23592960, 00:25:15.806 "uuid": "532d7671-2fbd-470b-89fe-c9097b3f6a68", 00:25:15.806 "assigned_rate_limits": { 00:25:15.806 "rw_ios_per_sec": 0, 00:25:15.806 "rw_mbytes_per_sec": 0, 00:25:15.806 "r_mbytes_per_sec": 0, 00:25:15.806 "w_mbytes_per_sec": 0 00:25:15.806 }, 00:25:15.806 "claimed": false, 00:25:15.806 "zoned": false, 00:25:15.806 "supported_io_types": { 00:25:15.806 "read": true, 00:25:15.806 "write": true, 00:25:15.806 "unmap": true, 00:25:15.806 "flush": true, 00:25:15.806 "reset": false, 00:25:15.806 "nvme_admin": false, 00:25:15.806 "nvme_io": false, 00:25:15.806 "nvme_io_md": false, 00:25:15.806 "write_zeroes": true, 00:25:15.806 "zcopy": false, 00:25:15.806 "get_zone_info": false, 00:25:15.806 "zone_management": false, 00:25:15.806 "zone_append": false, 00:25:15.806 "compare": false, 00:25:15.806 "compare_and_write": false, 00:25:15.806 "abort": false, 00:25:15.806 "seek_hole": false, 00:25:15.806 "seek_data": false, 00:25:15.806 "copy": false, 00:25:15.806 "nvme_iov_md": false 00:25:15.806 }, 00:25:15.806 "driver_specific": { 00:25:15.806 "ftl": { 00:25:15.806 "base_bdev": "7a8e31a0-8f89-4a64-b95d-cf41ead24703", 00:25:15.806 "cache": "nvc0n1p0" 00:25:15.806 } 00:25:15.806 } 00:25:15.806 } 00:25:15.806 ] 00:25:15.806 20:53:10 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:25:15.806 20:53:10 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:25:15.806 20:53:10 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:16.066 20:53:10 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:25:16.066 20:53:10 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:25:16.325 20:53:11 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:25:16.325 { 00:25:16.325 "name": "ftl0", 00:25:16.325 "aliases": [ 00:25:16.325 "532d7671-2fbd-470b-89fe-c9097b3f6a68" 00:25:16.325 ], 00:25:16.325 "product_name": "FTL disk", 00:25:16.325 "block_size": 4096, 00:25:16.325 "num_blocks": 23592960, 00:25:16.325 "uuid": "532d7671-2fbd-470b-89fe-c9097b3f6a68", 00:25:16.325 "assigned_rate_limits": { 00:25:16.325 "rw_ios_per_sec": 0, 00:25:16.325 "rw_mbytes_per_sec": 0, 00:25:16.325 "r_mbytes_per_sec": 0, 00:25:16.325 "w_mbytes_per_sec": 0 00:25:16.325 }, 00:25:16.325 "claimed": false, 00:25:16.326 "zoned": false, 00:25:16.326 "supported_io_types": { 00:25:16.326 "read": true, 00:25:16.326 "write": true, 00:25:16.326 "unmap": true, 00:25:16.326 "flush": true, 00:25:16.326 "reset": false, 00:25:16.326 "nvme_admin": false, 00:25:16.326 "nvme_io": false, 00:25:16.326 "nvme_io_md": false, 00:25:16.326 "write_zeroes": true, 00:25:16.326 "zcopy": false, 00:25:16.326 "get_zone_info": false, 00:25:16.326 "zone_management": false, 00:25:16.326 "zone_append": false, 00:25:16.326 "compare": false, 00:25:16.326 "compare_and_write": false, 00:25:16.326 "abort": false, 00:25:16.326 "seek_hole": false, 
00:25:16.326 "seek_data": false, 00:25:16.326 "copy": false, 00:25:16.326 "nvme_iov_md": false 00:25:16.326 }, 00:25:16.326 "driver_specific": { 00:25:16.326 "ftl": { 00:25:16.326 "base_bdev": "7a8e31a0-8f89-4a64-b95d-cf41ead24703", 00:25:16.326 "cache": "nvc0n1p0" 00:25:16.326 } 00:25:16.326 } 00:25:16.326 } 00:25:16.326 ]' 00:25:16.326 20:53:11 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:25:16.326 20:53:11 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:25:16.326 20:53:11 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:16.585 [2024-11-26 20:53:11.484079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.585 [2024-11-26 20:53:11.484143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:16.585 [2024-11-26 20:53:11.484163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:16.585 [2024-11-26 20:53:11.484176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.585 [2024-11-26 20:53:11.484217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:16.585 [2024-11-26 20:53:11.488444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.585 [2024-11-26 20:53:11.488476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:16.585 [2024-11-26 20:53:11.488498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.204 ms 00:25:16.585 [2024-11-26 20:53:11.488508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.585 [2024-11-26 20:53:11.489053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.585 [2024-11-26 20:53:11.489073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:16.585 [2024-11-26 20:53:11.489090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:25:16.585 [2024-11-26 20:53:11.489101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.585 [2024-11-26 20:53:11.492013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.585 [2024-11-26 20:53:11.492038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:16.585 [2024-11-26 20:53:11.492055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:25:16.585 [2024-11-26 20:53:11.492066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.585 [2024-11-26 20:53:11.497880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.585 [2024-11-26 20:53:11.497912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:16.585 [2024-11-26 20:53:11.497930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.754 ms 00:25:16.585 [2024-11-26 20:53:11.497941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.585 [2024-11-26 20:53:11.536431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.585 [2024-11-26 20:53:11.536476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:16.585 [2024-11-26 20:53:11.536502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.386 ms 00:25:16.585 [2024-11-26 20:53:11.536513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.585 [2024-11-26 20:53:11.559383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action
00:25:16.585 [2024-11-26 20:53:11.559429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:16.585 [2024-11-26 20:53:11.559466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.768 ms
00:25:16.585 [2024-11-26 20:53:11.559476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.585 [2024-11-26 20:53:11.559744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:16.585 [2024-11-26 20:53:11.559760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:16.585 [2024-11-26 20:53:11.559774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms
00:25:16.585 [2024-11-26 20:53:11.559784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.846 [2024-11-26 20:53:11.598222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:16.846 [2024-11-26 20:53:11.598269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:25:16.846 [2024-11-26 20:53:11.598291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.389 ms
00:25:16.846 [2024-11-26 20:53:11.598302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.846 [2024-11-26 20:53:11.636547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:16.846 [2024-11-26 20:53:11.636595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:25:16.846 [2024-11-26 20:53:11.636634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.128 ms
00:25:16.846 [2024-11-26 20:53:11.636645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.846 [2024-11-26 20:53:11.673811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:16.846 [2024-11-26 20:53:11.673860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:25:16.846 [2024-11-26 20:53:11.673883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.060 ms
00:25:16.846 [2024-11-26 20:53:11.673895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.846 [2024-11-26 20:53:11.711293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:16.846 [2024-11-26 20:53:11.711463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:16.846 [2024-11-26 20:53:11.711492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.215 ms
00:25:16.846 [2024-11-26 20:53:11.711502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.846 [2024-11-26 20:53:11.711598] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:16.846 [2024-11-26 20:53:11.711633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100 (each): 0 / 261120 wr_cnt: 0 state: free
00:25:16.847 [2024-11-26 20:53:11.712916] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:16.847 [2024-11-26 20:53:11.712931] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 532d7671-2fbd-470b-89fe-c9097b3f6a68
00:25:16.847 [2024-11-26 20:53:11.712943] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:16.847 [2024-11-26 20:53:11.712955] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:16.847 [2024-11-26 20:53:11.712969] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:16.847 [2024-11-26 20:53:11.712982] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
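The dump above records the device state at clean shutdown: every one of the 100 bands is still free (0 of 261120 blocks valid, write count 0), the L2P holds no valid LBAs, and all 960 media writes are FTL metadata with no user I/O behind them. That is why WAF prints as inf: write amplification is total media writes divided by user writes, and the denominator here is zero. The same arithmetic in one line of plain awk (960 and 0 are the values logged by ftl_dev_dump_stats above; nothing SPDK-specific):

  awk 'BEGIN { total = 960; user = 0; print (user > 0 ? total / user : "inf") }'

Once the spdk_dd stage below pushes user data through ftl0, user writes become non-zero and the ratio turns into a meaningful amplification figure.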
00:25:16.847 [2024-11-26 20:53:11.712992] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:16.847 [2024-11-26 20:53:11.713005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:16.847 [2024-11-26 20:53:11.713016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:16.847 [2024-11-26 20:53:11.713027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:16.847 [2024-11-26 20:53:11.713036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:16.847 [2024-11-26 20:53:11.713049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:16.847 [2024-11-26 20:53:11.713059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:16.847 [2024-11-26 20:53:11.713073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms
00:25:16.847 [2024-11-26 20:53:11.713083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.847 [2024-11-26 20:53:11.734084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:16.847 [2024-11-26 20:53:11.734124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:16.847 [2024-11-26 20:53:11.734143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.961 ms
00:25:16.847 [2024-11-26 20:53:11.734154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.847 [2024-11-26 20:53:11.734726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:16.847 [2024-11-26 20:53:11.734749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:16.847 [2024-11-26 20:53:11.734764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms
00:25:16.847 [2024-11-26 20:53:11.734775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.847 [2024-11-26 20:53:11.806262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:16.847 [2024-11-26 20:53:11.806311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:16.847 [2024-11-26 20:53:11.806328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:16.847 [2024-11-26 20:53:11.806356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.847 [2024-11-26 20:53:11.806516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:16.847 [2024-11-26 20:53:11.806530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:16.847 [2024-11-26 20:53:11.806544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:16.847 [2024-11-26 20:53:11.806554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.847 [2024-11-26 20:53:11.806649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:16.847 [2024-11-26 20:53:11.806667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:16.847 [2024-11-26 20:53:11.806683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:16.847 [2024-11-26 20:53:11.806693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:16.847 [2024-11-26 20:53:11.806730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:16.847 [2024-11-26 20:53:11.806741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:16.847 [2024-11-26 20:53:11.806754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:16.847 [2024-11-26 20:53:11.806764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.107 [2024-11-26 20:53:11.943405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:17.107 [2024-11-26 20:53:11.943468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:17.107 [2024-11-26 20:53:11.943486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:17.107 [2024-11-26 20:53:11.943497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.107 [2024-11-26 20:53:12.048595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:17.107 [2024-11-26 20:53:12.048671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:17.107 [2024-11-26 20:53:12.048690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:17.107 [2024-11-26 20:53:12.048701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.107 [2024-11-26 20:53:12.048866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:17.107 [2024-11-26 20:53:12.048879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:17.107 [2024-11-26 20:53:12.048900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:17.107 [2024-11-26 20:53:12.048910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.107 [2024-11-26 20:53:12.048974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:17.107 [2024-11-26 20:53:12.048985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:17.107 [2024-11-26 20:53:12.048998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:17.107 [2024-11-26 20:53:12.049008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.107 [2024-11-26 20:53:12.049136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:17.107 [2024-11-26 20:53:12.049149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:17.107 [2024-11-26 20:53:12.049163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:17.107 [2024-11-26 20:53:12.049176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.107 [2024-11-26 20:53:12.049239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:17.107 [2024-11-26 20:53:12.049251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:17.107 [2024-11-26 20:53:12.049264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:17.107 [2024-11-26 20:53:12.049275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.108 [2024-11-26 20:53:12.049335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:17.108 [2024-11-26 20:53:12.049346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:17.108 [2024-11-26 20:53:12.049361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:17.108 [2024-11-26 20:53:12.049375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.108 [2024-11-26 20:53:12.049433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:17.108 [2024-11-26 20:53:12.049445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:17.108 [2024-11-26 20:53:12.049458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:17.108 [2024-11-26 20:53:12.049468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:17.108 [2024-11-26 20:53:12.049673] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 565.560 ms, result 0
00:25:17.108 true
00:25:17.108 20:53:12 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78836
00:25:17.108 20:53:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78836 ']'
00:25:17.108 20:53:12 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78836
00:25:17.108 20:53:12 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:25:17.108 20:53:12 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:17.108 20:53:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78836
00:25:17.367 killing process with pid 78836
00:25:17.367 20:53:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:17.367 20:53:12 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:17.367 20:53:12 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78836'
00:25:17.367 20:53:12 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78836
00:25:17.367 20:53:12 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78836
00:25:22.722 20:53:17 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:25:23.654 65536+0 records in
00:25:23.654 65536+0 records out
00:25:23.654 268435456 bytes (268 MB, 256 MiB) copied, 1.10307 s, 243 MB/s
00:25:23.654 20:53:18 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:23.912 [2024-11-26 20:53:18.683955] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
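The xtrace above is the harness tearing down the previous SPDK app (pid 78836) and staging data for the next step: killprocess from common/autotest_common.sh checks the pid argument, probes the process with kill -0, confirms it is not a sudo parent, then kills it and waits about five seconds for the reactor to exit; trim.sh then generates a 256 MiB random pattern with dd (65536 blocks of 4 KiB in 1.10307 s, and 268435456 / 1.10307 is roughly 243 MB/s, matching the rate dd reports) and replays that pattern onto the ftl0 bdev with spdk_dd, whose startup banner begins here. A minimal standalone sketch of the kill-and-wait idiom traced above (simplified: the uname branch that selects the Linux ps syntax is dropped, and wait assumes the target is a child of the calling shell):

  killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1                      # the '[' -z 78836 ']' guard
    kill -0 "$pid" 2>/dev/null || return 0           # already gone, nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
    [[ "$process_name" != sudo ]] || return 1        # never signal an escalated parent
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap the child; about 5 s here (20:53:12 -> 20:53:17)
  }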
00:25:23.912 [2024-11-26 20:53:18.684132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79047 ] 00:25:23.912 [2024-11-26 20:53:18.878097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.171 [2024-11-26 20:53:19.046477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.429 [2024-11-26 20:53:19.410477] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:24.429 [2024-11-26 20:53:19.410546] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:24.688 [2024-11-26 20:53:19.574539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.688 [2024-11-26 20:53:19.574595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:24.688 [2024-11-26 20:53:19.574626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:24.688 [2024-11-26 20:53:19.574639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.688 [2024-11-26 20:53:19.578164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.688 [2024-11-26 20:53:19.578205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:24.688 [2024-11-26 20:53:19.578219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:25:24.688 [2024-11-26 20:53:19.578246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.688 [2024-11-26 20:53:19.578364] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:24.688 [2024-11-26 20:53:19.579396] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:24.688 [2024-11-26 20:53:19.579431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.688 [2024-11-26 20:53:19.579443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:24.688 [2024-11-26 20:53:19.579454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:25:24.688 [2024-11-26 20:53:19.579465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.688 [2024-11-26 20:53:19.581220] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:24.688 [2024-11-26 20:53:19.602386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.688 [2024-11-26 20:53:19.602426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:24.688 [2024-11-26 20:53:19.602443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.167 ms 00:25:24.688 [2024-11-26 20:53:19.602453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.688 [2024-11-26 20:53:19.602564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.688 [2024-11-26 20:53:19.602579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:24.688 [2024-11-26 20:53:19.602591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:24.688 [2024-11-26 20:53:19.602601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.688 [2024-11-26 20:53:19.609775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:24.688 [2024-11-26 20:53:19.609927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:24.688 [2024-11-26 20:53:19.609948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.113 ms 00:25:24.688 [2024-11-26 20:53:19.609959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.688 [2024-11-26 20:53:19.610080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.688 [2024-11-26 20:53:19.610095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:24.688 [2024-11-26 20:53:19.610106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:24.688 [2024-11-26 20:53:19.610116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.688 [2024-11-26 20:53:19.610151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.688 [2024-11-26 20:53:19.610162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:24.688 [2024-11-26 20:53:19.610173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:24.688 [2024-11-26 20:53:19.610184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.689 [2024-11-26 20:53:19.610210] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:24.689 [2024-11-26 20:53:19.615173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.689 [2024-11-26 20:53:19.615206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:24.689 [2024-11-26 20:53:19.615218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.971 ms 00:25:24.689 [2024-11-26 20:53:19.615245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.689 [2024-11-26 20:53:19.615322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.689 [2024-11-26 20:53:19.615336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:24.689 [2024-11-26 20:53:19.615348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:24.689 [2024-11-26 20:53:19.615360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.689 [2024-11-26 20:53:19.615391] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:24.689 [2024-11-26 20:53:19.615415] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:24.689 [2024-11-26 20:53:19.615455] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:24.689 [2024-11-26 20:53:19.615475] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:24.689 [2024-11-26 20:53:19.615582] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:24.689 [2024-11-26 20:53:19.615595] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:24.689 [2024-11-26 20:53:19.615609] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:24.689 [2024-11-26 20:53:19.615626] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:24.689 [2024-11-26 20:53:19.615682] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:24.689 [2024-11-26 20:53:19.615696] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:24.689 [2024-11-26 20:53:19.615707] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:24.689 [2024-11-26 20:53:19.615717] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:24.689 [2024-11-26 20:53:19.615728] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:24.689 [2024-11-26 20:53:19.615740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.689 [2024-11-26 20:53:19.615751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:24.689 [2024-11-26 20:53:19.615763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:25:24.689 [2024-11-26 20:53:19.615773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.689 [2024-11-26 20:53:19.615860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.689 [2024-11-26 20:53:19.615877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:24.689 [2024-11-26 20:53:19.615888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:24.689 [2024-11-26 20:53:19.615900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.689 [2024-11-26 20:53:19.616001] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:24.689 [2024-11-26 20:53:19.616016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:24.689 [2024-11-26 20:53:19.616027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:24.689 [2024-11-26 20:53:19.616039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:24.689 [2024-11-26 20:53:19.616060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:24.689 [2024-11-26 20:53:19.616081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:24.689 [2024-11-26 20:53:19.616092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:24.689 [2024-11-26 20:53:19.616113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:24.689 [2024-11-26 20:53:19.616136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:24.689 [2024-11-26 20:53:19.616147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:24.689 [2024-11-26 20:53:19.616157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:24.689 [2024-11-26 20:53:19.616167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:24.689 [2024-11-26 20:53:19.616177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:24.689 [2024-11-26 20:53:19.616198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:24.689 [2024-11-26 20:53:19.616209] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:24.689 [2024-11-26 20:53:19.616230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:24.689 [2024-11-26 20:53:19.616251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:24.689 [2024-11-26 20:53:19.616261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:24.689 [2024-11-26 20:53:19.616281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:24.689 [2024-11-26 20:53:19.616291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:24.689 [2024-11-26 20:53:19.616311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:24.689 [2024-11-26 20:53:19.616320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:24.689 [2024-11-26 20:53:19.616340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:24.689 [2024-11-26 20:53:19.616350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:24.689 [2024-11-26 20:53:19.616370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:24.689 [2024-11-26 20:53:19.616381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:24.689 [2024-11-26 20:53:19.616391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:24.689 [2024-11-26 20:53:19.616401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:24.689 [2024-11-26 20:53:19.616411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:24.689 [2024-11-26 20:53:19.616420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:24.689 [2024-11-26 20:53:19.616440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:24.689 [2024-11-26 20:53:19.616450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616461] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:24.689 [2024-11-26 20:53:19.616473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:24.689 [2024-11-26 20:53:19.616488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:24.689 [2024-11-26 20:53:19.616499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:24.689 [2024-11-26 20:53:19.616510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:24.689 [2024-11-26 20:53:19.616520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:24.689 [2024-11-26 20:53:19.616530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:24.689 
[2024-11-26 20:53:19.616540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:24.689 [2024-11-26 20:53:19.616550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:24.689 [2024-11-26 20:53:19.616561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:24.689 [2024-11-26 20:53:19.616573] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:24.690 [2024-11-26 20:53:19.616586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.690 [2024-11-26 20:53:19.616599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:24.690 [2024-11-26 20:53:19.616610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:24.690 [2024-11-26 20:53:19.616621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:24.690 [2024-11-26 20:53:19.616645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:24.690 [2024-11-26 20:53:19.616657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:24.690 [2024-11-26 20:53:19.616669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:24.690 [2024-11-26 20:53:19.616680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:24.690 [2024-11-26 20:53:19.616691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:24.690 [2024-11-26 20:53:19.616703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:24.690 [2024-11-26 20:53:19.616714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:24.690 [2024-11-26 20:53:19.616726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:24.690 [2024-11-26 20:53:19.616737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:24.690 [2024-11-26 20:53:19.616748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:24.690 [2024-11-26 20:53:19.616760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:24.690 [2024-11-26 20:53:19.616771] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:24.690 [2024-11-26 20:53:19.616783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.690 [2024-11-26 20:53:19.616795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:24.690 [2024-11-26 20:53:19.616807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:24.690 [2024-11-26 20:53:19.616818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:24.690 [2024-11-26 20:53:19.616829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:24.690 [2024-11-26 20:53:19.616841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.690 [2024-11-26 20:53:19.616858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:24.690 [2024-11-26 20:53:19.616869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:25:24.690 [2024-11-26 20:53:19.616880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.690 [2024-11-26 20:53:19.658892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.690 [2024-11-26 20:53:19.659082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:24.690 [2024-11-26 20:53:19.659107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.946 ms 00:25:24.690 [2024-11-26 20:53:19.659136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.690 [2024-11-26 20:53:19.659317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.690 [2024-11-26 20:53:19.659331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:24.690 [2024-11-26 20:53:19.659343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:24.690 [2024-11-26 20:53:19.659354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.718316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.950 [2024-11-26 20:53:19.718362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:24.950 [2024-11-26 20:53:19.718382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.935 ms 00:25:24.950 [2024-11-26 20:53:19.718393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.718524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.950 [2024-11-26 20:53:19.718538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:24.950 [2024-11-26 20:53:19.718549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:24.950 [2024-11-26 20:53:19.718560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.719070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.950 [2024-11-26 20:53:19.719086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:24.950 [2024-11-26 20:53:19.719106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:25:24.950 [2024-11-26 20:53:19.719117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.719248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.950 [2024-11-26 20:53:19.719269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:24.950 [2024-11-26 20:53:19.719281] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:25:24.950 [2024-11-26 20:53:19.719292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.739995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.950 [2024-11-26 20:53:19.740160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:24.950 [2024-11-26 20:53:19.740256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.675 ms 00:25:24.950 [2024-11-26 20:53:19.740297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.760792] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:24.950 [2024-11-26 20:53:19.761002] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:24.950 [2024-11-26 20:53:19.761160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.950 [2024-11-26 20:53:19.761199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:24.950 [2024-11-26 20:53:19.761233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.685 ms 00:25:24.950 [2024-11-26 20:53:19.761267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.792539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.950 [2024-11-26 20:53:19.792717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:24.950 [2024-11-26 20:53:19.792866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.100 ms 00:25:24.950 [2024-11-26 20:53:19.792910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.811756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.950 [2024-11-26 20:53:19.811909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:24.950 [2024-11-26 20:53:19.811991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.732 ms 00:25:24.950 [2024-11-26 20:53:19.812030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.831219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.950 [2024-11-26 20:53:19.831374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:24.950 [2024-11-26 20:53:19.831457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.082 ms 00:25:24.950 [2024-11-26 20:53:19.831496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.832480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.950 [2024-11-26 20:53:19.832516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:24.950 [2024-11-26 20:53:19.832531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:25:24.950 [2024-11-26 20:53:19.832542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.923047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.950 [2024-11-26 20:53:19.923280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:24.950 [2024-11-26 20:53:19.923324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.473 ms 00:25:24.950 [2024-11-26 20:53:19.923337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.950 [2024-11-26 20:53:19.935080] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:25.209 [2024-11-26 20:53:19.952584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.209 [2024-11-26 20:53:19.952840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:25.209 [2024-11-26 20:53:19.952870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.110 ms 00:25:25.209 [2024-11-26 20:53:19.952884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.209 [2024-11-26 20:53:19.953062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.210 [2024-11-26 20:53:19.953077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:25.210 [2024-11-26 20:53:19.953089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:25.210 [2024-11-26 20:53:19.953099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.210 [2024-11-26 20:53:19.953160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.210 [2024-11-26 20:53:19.953172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:25.210 [2024-11-26 20:53:19.953184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:25.210 [2024-11-26 20:53:19.953194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.210 [2024-11-26 20:53:19.953231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.210 [2024-11-26 20:53:19.953247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:25.210 [2024-11-26 20:53:19.953259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:25.210 [2024-11-26 20:53:19.953269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.210 [2024-11-26 20:53:19.953306] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:25.210 [2024-11-26 20:53:19.953318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.210 [2024-11-26 20:53:19.953328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:25.210 [2024-11-26 20:53:19.953340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:25.210 [2024-11-26 20:53:19.953351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.210 [2024-11-26 20:53:19.991628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.210 [2024-11-26 20:53:19.991784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:25.210 [2024-11-26 20:53:19.991823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.255 ms 00:25:25.210 [2024-11-26 20:53:19.991835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.210 [2024-11-26 20:53:19.991991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.210 [2024-11-26 20:53:19.992009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:25.210 [2024-11-26 20:53:19.992022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:25.210 [2024-11-26 20:53:19.992033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
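The startup trace above also shows how the layout numbers fit together. The superblock region table expresses offsets and sizes in 4 KiB FTL blocks, so the l2p region (type:0x2) with blk_sz:0x5a00 is 23040 blocks, i.e. 90 MiB, which is exactly the 23592960 L2P entries reported by ftl_layout_setup multiplied by the 4-byte L2P address size. Both identities can be checked with plain shell arithmetic:

  echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # l2p region: 23040 blocks x 4 KiB -> 90 (MiB)
  echo $(( 23592960 * 4 / 1024 / 1024 ))    # L2P entries x 4-byte addresses  -> 90 (MiB)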
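Every management step in these traces is bracketed by the same trace_step records: the 428 line carries the step name, the 430 line its duration, the 431 line its status. That makes it possible to rank the slow startup or shutdown steps straight from a saved console log. A rough sketch, assuming the log has been split one record per line and saved as ftl.log (a placeholder path):

  awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
       /430:trace_step/ { sub(/.*duration: /, ""); printf "%10.3f ms  %s\n", $1, name }' ftl.log | sort -rn | head

For the 'FTL startup' process that finishes just below (418.190 ms total), the top entries are Restore P2L checkpoints (90.473 ms) and Initialize NV cache (58.935 ms).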
00:25:25.210 [2024-11-26 20:53:19.993047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:25.210 [2024-11-26 20:53:19.997646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 418.190 ms, result 0 00:25:25.210 [2024-11-26 20:53:19.998452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:25.210 [2024-11-26 20:53:20.019549] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:26.249  [2024-11-26T20:53:22.202Z] Copying: 28/256 [MB] (28 MBps) [2024-11-26T20:53:23.134Z] Copying: 56/256 [MB] (27 MBps) [2024-11-26T20:53:24.067Z] Copying: 82/256 [MB] (26 MBps) [2024-11-26T20:53:25.441Z] Copying: 108/256 [MB] (25 MBps) [2024-11-26T20:53:26.375Z] Copying: 133/256 [MB] (25 MBps) [2024-11-26T20:53:27.310Z] Copying: 159/256 [MB] (26 MBps) [2024-11-26T20:53:28.244Z] Copying: 186/256 [MB] (26 MBps) [2024-11-26T20:53:29.176Z] Copying: 212/256 [MB] (26 MBps) [2024-11-26T20:53:29.743Z] Copying: 239/256 [MB] (27 MBps) [2024-11-26T20:53:29.743Z] Copying: 256/256 [MB] (average 26 MBps)[2024-11-26 20:53:29.652998] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:34.749 [2024-11-26 20:53:29.667325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.749 [2024-11-26 20:53:29.667367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:34.749 [2024-11-26 20:53:29.667382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:34.749 [2024-11-26 20:53:29.667415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.749 [2024-11-26 20:53:29.667440] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:34.749 [2024-11-26 20:53:29.671817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.749 [2024-11-26 20:53:29.671958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:34.749 [2024-11-26 20:53:29.672039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.359 ms 00:25:34.749 [2024-11-26 20:53:29.672076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.749 [2024-11-26 20:53:29.673823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.749 [2024-11-26 20:53:29.673977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:34.749 [2024-11-26 20:53:29.673999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.694 ms 00:25:34.749 [2024-11-26 20:53:29.674012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.749 [2024-11-26 20:53:29.680287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.749 [2024-11-26 20:53:29.680331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:34.749 [2024-11-26 20:53:29.680344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.247 ms 00:25:34.749 [2024-11-26 20:53:29.680371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.749 [2024-11-26 20:53:29.686481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.749 [2024-11-26 20:53:29.686514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:34.749 
[2024-11-26 20:53:29.686526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.051 ms
00:25:34.749 [2024-11-26 20:53:29.686552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:34.749 [2024-11-26 20:53:29.723343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:34.749 [2024-11-26 20:53:29.723379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:34.749 [2024-11-26 20:53:29.723393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.744 ms
00:25:34.749 [2024-11-26 20:53:29.723418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.009 [2024-11-26 20:53:29.745211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:35.009 [2024-11-26 20:53:29.745255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:35.009 [2024-11-26 20:53:29.745274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.736 ms
00:25:35.009 [2024-11-26 20:53:29.745284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.009 [2024-11-26 20:53:29.745420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:35.009 [2024-11-26 20:53:29.745434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:35.009 [2024-11-26 20:53:29.745445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms
00:25:35.009 [2024-11-26 20:53:29.745467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.009 [2024-11-26 20:53:29.782305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:35.009 [2024-11-26 20:53:29.782342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:25:35.009 [2024-11-26 20:53:29.782355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.819 ms
00:25:35.009 [2024-11-26 20:53:29.782365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.009 [2024-11-26 20:53:29.819103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:35.009 [2024-11-26 20:53:29.819138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:25:35.009 [2024-11-26 20:53:29.819151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.680 ms
00:25:35.009 [2024-11-26 20:53:29.819160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.009 [2024-11-26 20:53:29.855058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:35.009 [2024-11-26 20:53:29.855218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:25:35.009 [2024-11-26 20:53:29.855239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.843 ms
00:25:35.009 [2024-11-26 20:53:29.855250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.009 [2024-11-26 20:53:29.891938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:35.009 [2024-11-26 20:53:29.891974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:35.009 [2024-11-26 20:53:29.891987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.606 ms
00:25:35.009 [2024-11-26 20:53:29.891997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:35.009 [2024-11-26 20:53:29.892054] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:35.009 [2024-11-26 20:53:29.892071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-74 (each): 0 / 261120 wr_cnt: 0 state: free
00:25:35.010 [2024-11-26 20:53:29.892909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120
wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.892919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.892929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.892940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.892951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.892961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.892972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.892982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.892993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:35.010 [2024-11-26 20:53:29.893204] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:35.010 [2024-11-26 20:53:29.893214] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 532d7671-2fbd-470b-89fe-c9097b3f6a68 00:25:35.010 [2024-11-26 20:53:29.893225] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:35.010 [2024-11-26 20:53:29.893236] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:35.010 [2024-11-26 20:53:29.893245] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:35.010 [2024-11-26 20:53:29.893256] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:35.010 [2024-11-26 20:53:29.893265] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:35.010 [2024-11-26 20:53:29.893275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:35.010 [2024-11-26 20:53:29.893285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:35.010 [2024-11-26 20:53:29.893295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:35.010 [2024-11-26 20:53:29.893304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:35.010 [2024-11-26 20:53:29.893314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.010 [2024-11-26 20:53:29.893328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:35.010 [2024-11-26 20:53:29.893339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:25:35.010 [2024-11-26 20:53:29.893349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.010 [2024-11-26 20:53:29.913878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.010 [2024-11-26 20:53:29.913911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:35.010 [2024-11-26 20:53:29.913924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.509 ms 00:25:35.010 [2024-11-26 20:53:29.913934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.010 [2024-11-26 20:53:29.914475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.010 [2024-11-26 20:53:29.914491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:35.010 [2024-11-26 20:53:29.914502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:25:35.010 [2024-11-26 20:53:29.914512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.010 [2024-11-26 20:53:29.971139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.010 [2024-11-26 20:53:29.971303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.010 [2024-11-26 20:53:29.971324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.010 [2024-11-26 20:53:29.971335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.010 [2024-11-26 20:53:29.971425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.010 [2024-11-26 20:53:29.971437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.010 [2024-11-26 20:53:29.971448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.010 [2024-11-26 20:53:29.971458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:35.010 [2024-11-26 20:53:29.971510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.010 [2024-11-26 20:53:29.971523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.010 [2024-11-26 20:53:29.971534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.010 [2024-11-26 20:53:29.971544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.010 [2024-11-26 20:53:29.971564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.011 [2024-11-26 20:53:29.971579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.011 [2024-11-26 20:53:29.971589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.011 [2024-11-26 20:53:29.971599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.281 [2024-11-26 20:53:30.100380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.281 [2024-11-26 20:53:30.100643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.281 [2024-11-26 20:53:30.100666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.281 [2024-11-26 20:53:30.100678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.281 [2024-11-26 20:53:30.204450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.281 [2024-11-26 20:53:30.204704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.281 [2024-11-26 20:53:30.204729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.281 [2024-11-26 20:53:30.204742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.281 [2024-11-26 20:53:30.204868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.281 [2024-11-26 20:53:30.204880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.281 [2024-11-26 20:53:30.204892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.281 [2024-11-26 20:53:30.204902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.281 [2024-11-26 20:53:30.204931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.281 [2024-11-26 20:53:30.204943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.281 [2024-11-26 20:53:30.204959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.281 [2024-11-26 20:53:30.204970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.281 [2024-11-26 20:53:30.205107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.281 [2024-11-26 20:53:30.205121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.281 [2024-11-26 20:53:30.205133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.281 [2024-11-26 20:53:30.205143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.281 [2024-11-26 20:53:30.205187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.281 [2024-11-26 20:53:30.205200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:35.281 [2024-11-26 20:53:30.205210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.281 
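
The statistics dump a little above reports total writes: 960 against user writes: 0, which is why the WAF line prints inf: with no user I/O yet, write amplification (commonly defined as media writes divided by user writes) is undefined. A minimal bash sketch of that calculation, assuming this common definition is the one the dump uses:

    total_writes=960   # "total writes" from the dump above
    user_writes=0      # "user writes" from the dump above
    if (( user_writes == 0 )); then
        echo "WAF: inf"   # no user I/O yet, so amplification is undefined
    else
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.3f\n", t / u }'
    fi
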
[2024-11-26 20:53:30.205226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.281 [2024-11-26 20:53:30.205266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.281 [2024-11-26 20:53:30.205277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.281 [2024-11-26 20:53:30.205288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.281 [2024-11-26 20:53:30.205297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.281 [2024-11-26 20:53:30.205341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.281 [2024-11-26 20:53:30.205353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.282 [2024-11-26 20:53:30.205368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.282 [2024-11-26 20:53:30.205379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.282 [2024-11-26 20:53:30.205516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 538.190 ms, result 0 00:25:36.662 00:25:36.662 00:25:36.662 20:53:31 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79172 00:25:36.662 20:53:31 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:36.662 20:53:31 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79172 00:25:36.662 20:53:31 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79172 ']' 00:25:36.662 20:53:31 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.662 20:53:31 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:36.662 20:53:31 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.662 20:53:31 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:36.662 20:53:31 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:36.662 [2024-11-26 20:53:31.482471] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:25:36.662 [2024-11-26 20:53:31.482673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79172 ] 00:25:36.662 [2024-11-26 20:53:31.654440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.921 [2024-11-26 20:53:31.772124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.856 20:53:32 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.856 20:53:32 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:37.856 20:53:32 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:38.114 [2024-11-26 20:53:32.927840] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:38.114 [2024-11-26 20:53:32.927906] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:38.373 [2024-11-26 20:53:33.109982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.110048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:38.373 [2024-11-26 20:53:33.110070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:38.373 [2024-11-26 20:53:33.110082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.114181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.114224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:38.373 [2024-11-26 20:53:33.114240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.077 ms 00:25:38.373 [2024-11-26 20:53:33.114250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.114362] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:38.373 [2024-11-26 20:53:33.115358] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:38.373 [2024-11-26 20:53:33.115395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.115406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:38.373 [2024-11-26 20:53:33.115420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:25:38.373 [2024-11-26 20:53:33.115433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.117004] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:38.373 [2024-11-26 20:53:33.137230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.137399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:38.373 [2024-11-26 20:53:33.137422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.231 ms 00:25:38.373 [2024-11-26 20:53:33.137439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.137579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.137599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:38.373 [2024-11-26 20:53:33.137632] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:38.373 [2024-11-26 20:53:33.137649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.144593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.144773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:38.373 [2024-11-26 20:53:33.144796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.882 ms 00:25:38.373 [2024-11-26 20:53:33.144812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.144967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.144988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:38.373 [2024-11-26 20:53:33.145000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:38.373 [2024-11-26 20:53:33.145022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.145051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.145068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:38.373 [2024-11-26 20:53:33.145079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:38.373 [2024-11-26 20:53:33.145095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.145125] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:38.373 [2024-11-26 20:53:33.149986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.150020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:38.373 [2024-11-26 20:53:33.150038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.864 ms 00:25:38.373 [2024-11-26 20:53:33.150048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.150132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.150144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:38.373 [2024-11-26 20:53:33.150166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:38.373 [2024-11-26 20:53:33.150176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.150206] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:38.373 [2024-11-26 20:53:33.150230] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:38.373 [2024-11-26 20:53:33.150282] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:38.373 [2024-11-26 20:53:33.150303] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:38.373 [2024-11-26 20:53:33.150401] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:38.373 [2024-11-26 20:53:33.150415] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:38.373 [2024-11-26 20:53:33.150443] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:38.373 [2024-11-26 20:53:33.150457] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:38.373 [2024-11-26 20:53:33.150474] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:38.373 [2024-11-26 20:53:33.150485] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:38.373 [2024-11-26 20:53:33.150500] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:38.373 [2024-11-26 20:53:33.150510] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:38.373 [2024-11-26 20:53:33.150530] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:38.373 [2024-11-26 20:53:33.150541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.150556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:38.373 [2024-11-26 20:53:33.150567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:25:38.373 [2024-11-26 20:53:33.150588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.150690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.373 [2024-11-26 20:53:33.150709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:38.373 [2024-11-26 20:53:33.150719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:38.373 [2024-11-26 20:53:33.150741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.373 [2024-11-26 20:53:33.150834] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:38.373 [2024-11-26 20:53:33.150852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:38.373 [2024-11-26 20:53:33.150863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:38.373 [2024-11-26 20:53:33.150878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.373 [2024-11-26 20:53:33.150889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:38.373 [2024-11-26 20:53:33.150904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:38.373 [2024-11-26 20:53:33.150913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:38.373 [2024-11-26 20:53:33.150934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:38.373 [2024-11-26 20:53:33.150945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:38.373 [2024-11-26 20:53:33.150960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:38.373 [2024-11-26 20:53:33.150970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:38.374 [2024-11-26 20:53:33.150985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:38.374 [2024-11-26 20:53:33.150994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:38.374 [2024-11-26 20:53:33.151009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:38.374 [2024-11-26 20:53:33.151019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:38.374 [2024-11-26 20:53:33.151033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.374 
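
The counters in the layout dump around this point are internally consistent, assuming the FTL's logical block size is 4 KiB (the block size itself is not printed here, so that part is an assumption). A quick bash check:

    entries=23592960      # "L2P entries" from the dump above
    addr_size=4           # "L2P address size" in bytes
    blk=4096              # assumed FTL logical block size in bytes
    echo "l2p region: $(( entries * addr_size / 1024 / 1024 )) MiB"   # 90, matches "Region l2p ... blocks: 90.00 MiB"
    echo "user space: $(( entries * blk / 1024 / 1024 )) MiB"         # 92160 MiB of addressable user LBAs
    echo "band space: $(( 100 * 261120 * blk / 1024 / 1024 )) MiB"    # 100 bands x 261120 blocks = 102000 MiB

The 100 bands of 261120 blocks seen in the validity dumps account for 102000 of the 102400 MiB data_btm region shown just below, and the L2P maps 92160 MiB of that to user LBAs; the later l2p cache note ("maximum resident size is: 59 (of 60) MiB") suggests only part of this 90 MiB table is kept resident at once.
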
[2024-11-26 20:53:33.151043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:38.374 [2024-11-26 20:53:33.151058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:38.374 [2024-11-26 20:53:33.151080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.374 [2024-11-26 20:53:33.151095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:38.374 [2024-11-26 20:53:33.151104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:38.374 [2024-11-26 20:53:33.151119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.374 [2024-11-26 20:53:33.151129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:38.374 [2024-11-26 20:53:33.151148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:38.374 [2024-11-26 20:53:33.151157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.374 [2024-11-26 20:53:33.151172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:38.374 [2024-11-26 20:53:33.151182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:38.374 [2024-11-26 20:53:33.151196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.374 [2024-11-26 20:53:33.151205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:38.374 [2024-11-26 20:53:33.151219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:38.374 [2024-11-26 20:53:33.151229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:38.374 [2024-11-26 20:53:33.151243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:38.374 [2024-11-26 20:53:33.151253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:38.374 [2024-11-26 20:53:33.151268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:38.374 [2024-11-26 20:53:33.151278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:38.374 [2024-11-26 20:53:33.151292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:38.374 [2024-11-26 20:53:33.151302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:38.374 [2024-11-26 20:53:33.151318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:38.374 [2024-11-26 20:53:33.151327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:38.374 [2024-11-26 20:53:33.151345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.374 [2024-11-26 20:53:33.151355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:38.374 [2024-11-26 20:53:33.151369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:38.374 [2024-11-26 20:53:33.151379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.374 [2024-11-26 20:53:33.151398] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:38.374 [2024-11-26 20:53:33.151414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:38.374 [2024-11-26 20:53:33.151428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:38.374 [2024-11-26 20:53:33.151438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:38.374 [2024-11-26 20:53:33.151454] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:38.374 [2024-11-26 20:53:33.151464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:38.374 [2024-11-26 20:53:33.151480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:38.374 [2024-11-26 20:53:33.151490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:38.374 [2024-11-26 20:53:33.151504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:38.374 [2024-11-26 20:53:33.151515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:38.374 [2024-11-26 20:53:33.151530] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:38.374 [2024-11-26 20:53:33.151543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:38.374 [2024-11-26 20:53:33.151564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:38.374 [2024-11-26 20:53:33.151575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:38.374 [2024-11-26 20:53:33.151592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:38.374 [2024-11-26 20:53:33.151603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:38.374 [2024-11-26 20:53:33.151629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:38.374 [2024-11-26 20:53:33.151648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:38.374 [2024-11-26 20:53:33.151663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:38.374 [2024-11-26 20:53:33.151674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:38.374 [2024-11-26 20:53:33.151707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:38.374 [2024-11-26 20:53:33.151719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:38.374 [2024-11-26 20:53:33.151736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:38.374 [2024-11-26 20:53:33.151748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:38.374 [2024-11-26 20:53:33.151764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:38.374 [2024-11-26 20:53:33.151776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:38.374 [2024-11-26 20:53:33.151793] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:38.374 [2024-11-26 
20:53:33.151806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:38.374 [2024-11-26 20:53:33.151841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:38.374 [2024-11-26 20:53:33.151856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:38.374 [2024-11-26 20:53:33.151872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:38.374 [2024-11-26 20:53:33.151883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:38.374 [2024-11-26 20:53:33.151899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.374 [2024-11-26 20:53:33.151911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:38.374 [2024-11-26 20:53:33.151926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:25:38.374 [2024-11-26 20:53:33.151941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.374 [2024-11-26 20:53:33.195820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.374 [2024-11-26 20:53:33.196020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:38.374 [2024-11-26 20:53:33.196055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.809 ms 00:25:38.374 [2024-11-26 20:53:33.196073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.374 [2024-11-26 20:53:33.196233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.374 [2024-11-26 20:53:33.196246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:38.374 [2024-11-26 20:53:33.196262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:38.374 [2024-11-26 20:53:33.196273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.374 [2024-11-26 20:53:33.247117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.374 [2024-11-26 20:53:33.247167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:38.374 [2024-11-26 20:53:33.247187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.811 ms 00:25:38.374 [2024-11-26 20:53:33.247198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.374 [2024-11-26 20:53:33.247317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.374 [2024-11-26 20:53:33.247330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:38.374 [2024-11-26 20:53:33.247346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:38.374 [2024-11-26 20:53:33.247357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.374 [2024-11-26 20:53:33.247836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.374 [2024-11-26 20:53:33.247856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:38.374 [2024-11-26 20:53:33.247872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:25:38.374 [2024-11-26 20:53:33.247883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:38.374 [2024-11-26 20:53:33.248035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.374 [2024-11-26 20:53:33.248055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:38.374 [2024-11-26 20:53:33.248073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:25:38.374 [2024-11-26 20:53:33.248084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.374 [2024-11-26 20:53:33.271521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.374 [2024-11-26 20:53:33.271563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:38.374 [2024-11-26 20:53:33.271586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.402 ms 00:25:38.374 [2024-11-26 20:53:33.271597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.374 [2024-11-26 20:53:33.301145] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:38.374 [2024-11-26 20:53:33.301186] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:38.374 [2024-11-26 20:53:33.301207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.375 [2024-11-26 20:53:33.301219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:38.375 [2024-11-26 20:53:33.301233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.438 ms 00:25:38.375 [2024-11-26 20:53:33.301254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.375 [2024-11-26 20:53:33.331997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.375 [2024-11-26 20:53:33.332040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:38.375 [2024-11-26 20:53:33.332058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.651 ms 00:25:38.375 [2024-11-26 20:53:33.332070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.375 [2024-11-26 20:53:33.351050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.375 [2024-11-26 20:53:33.351090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:38.375 [2024-11-26 20:53:33.351110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.888 ms 00:25:38.375 [2024-11-26 20:53:33.351120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.633 [2024-11-26 20:53:33.369978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.633 [2024-11-26 20:53:33.370131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:38.633 [2024-11-26 20:53:33.370158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.759 ms 00:25:38.633 [2024-11-26 20:53:33.370168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.633 [2024-11-26 20:53:33.371156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.633 [2024-11-26 20:53:33.371189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:38.633 [2024-11-26 20:53:33.371208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:25:38.633 [2024-11-26 20:53:33.371219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.633 [2024-11-26 
20:53:33.464672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.633 [2024-11-26 20:53:33.464729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:38.633 [2024-11-26 20:53:33.464752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.412 ms 00:25:38.633 [2024-11-26 20:53:33.464763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.633 [2024-11-26 20:53:33.476481] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:38.633 [2024-11-26 20:53:33.493597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.633 [2024-11-26 20:53:33.493721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:38.633 [2024-11-26 20:53:33.493737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.662 ms 00:25:38.633 [2024-11-26 20:53:33.493753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.633 [2024-11-26 20:53:33.493882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.633 [2024-11-26 20:53:33.493902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:38.633 [2024-11-26 20:53:33.493914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:38.633 [2024-11-26 20:53:33.493932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.633 [2024-11-26 20:53:33.493988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.633 [2024-11-26 20:53:33.494005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:38.633 [2024-11-26 20:53:33.494016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:38.633 [2024-11-26 20:53:33.494037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.633 [2024-11-26 20:53:33.494064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.633 [2024-11-26 20:53:33.494080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:38.633 [2024-11-26 20:53:33.494091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:38.633 [2024-11-26 20:53:33.494105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.633 [2024-11-26 20:53:33.494150] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:38.633 [2024-11-26 20:53:33.494173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.633 [2024-11-26 20:53:33.494189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:38.633 [2024-11-26 20:53:33.494205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:38.633 [2024-11-26 20:53:33.494221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.633 [2024-11-26 20:53:33.532638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.633 [2024-11-26 20:53:33.532794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:38.633 [2024-11-26 20:53:33.532880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.379 ms 00:25:38.633 [2024-11-26 20:53:33.532921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.633 [2024-11-26 20:53:33.533066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.633 [2024-11-26 20:53:33.533155] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:38.633 [2024-11-26 20:53:33.533210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:38.633 [2024-11-26 20:53:33.533243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.633 [2024-11-26 20:53:33.534392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:38.633 [2024-11-26 20:53:33.539048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 423.997 ms, result 0 00:25:38.633 [2024-11-26 20:53:33.540379] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:38.633 Some configs were skipped because the RPC state that can call them passed over. 00:25:38.633 20:53:33 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:38.890 [2024-11-26 20:53:33.828456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.890 [2024-11-26 20:53:33.828541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:38.890 [2024-11-26 20:53:33.828561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:25:38.890 [2024-11-26 20:53:33.828579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.890 [2024-11-26 20:53:33.828642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.596 ms, result 0 00:25:38.890 true 00:25:38.890 20:53:33 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:39.148 [2024-11-26 20:53:34.032477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.148 [2024-11-26 20:53:34.032535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:39.148 [2024-11-26 20:53:34.032560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:25:39.148 [2024-11-26 20:53:34.032573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.148 [2024-11-26 20:53:34.032647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.378 ms, result 0 00:25:39.148 true 00:25:39.148 20:53:34 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79172 00:25:39.148 20:53:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79172 ']' 00:25:39.148 20:53:34 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79172 00:25:39.148 20:53:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:39.148 20:53:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.148 20:53:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79172 00:25:39.148 killing process with pid 79172 00:25:39.148 20:53:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.148 20:53:34 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.148 20:53:34 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79172' 00:25:39.148 20:53:34 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79172 00:25:39.148 20:53:34 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79172 00:25:40.519 [2024-11-26 20:53:35.240113] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.519 [2024-11-26 20:53:35.240169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:40.519 [2024-11-26 20:53:35.240186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:40.519 [2024-11-26 20:53:35.240198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.519 [2024-11-26 20:53:35.240225] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:40.519 [2024-11-26 20:53:35.244558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.519 [2024-11-26 20:53:35.244592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:40.519 [2024-11-26 20:53:35.244623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.312 ms 00:25:40.519 [2024-11-26 20:53:35.244634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.519 [2024-11-26 20:53:35.244903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.519 [2024-11-26 20:53:35.244917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:40.519 [2024-11-26 20:53:35.244931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:25:40.519 [2024-11-26 20:53:35.244941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.519 [2024-11-26 20:53:35.248312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.519 [2024-11-26 20:53:35.248367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:40.519 [2024-11-26 20:53:35.248383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:25:40.519 [2024-11-26 20:53:35.248394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.519 [2024-11-26 20:53:35.254220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.519 [2024-11-26 20:53:35.254253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:40.519 [2024-11-26 20:53:35.254268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.784 ms 00:25:40.519 [2024-11-26 20:53:35.254294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.519 [2024-11-26 20:53:35.269597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.519 [2024-11-26 20:53:35.269779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:40.519 [2024-11-26 20:53:35.269809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.240 ms 00:25:40.519 [2024-11-26 20:53:35.269819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.519 [2024-11-26 20:53:35.280270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.520 [2024-11-26 20:53:35.280310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:40.520 [2024-11-26 20:53:35.280326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.369 ms 00:25:40.520 [2024-11-26 20:53:35.280337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.520 [2024-11-26 20:53:35.280485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.520 [2024-11-26 20:53:35.280499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:40.520 [2024-11-26 20:53:35.280513] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:40.520 [2024-11-26 20:53:35.280523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.520 [2024-11-26 20:53:35.295880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.520 [2024-11-26 20:53:35.295914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:40.520 [2024-11-26 20:53:35.295936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.331 ms 00:25:40.520 [2024-11-26 20:53:35.295947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.520 [2024-11-26 20:53:35.311292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.520 [2024-11-26 20:53:35.311325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:40.520 [2024-11-26 20:53:35.311349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.285 ms 00:25:40.520 [2024-11-26 20:53:35.311359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.520 [2024-11-26 20:53:35.326129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.520 [2024-11-26 20:53:35.326163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:40.520 [2024-11-26 20:53:35.326182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.707 ms 00:25:40.520 [2024-11-26 20:53:35.326192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.520 [2024-11-26 20:53:35.340818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.520 [2024-11-26 20:53:35.340852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:40.520 [2024-11-26 20:53:35.340871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.539 ms 00:25:40.520 [2024-11-26 20:53:35.340881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.520 [2024-11-26 20:53:35.340936] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:40.520 [2024-11-26 20:53:35.340955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.340980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.340992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 
20:53:35.341107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:25:40.520 [2024-11-26 20:53:35.341455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:40.520 [2024-11-26 20:53:35.341949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.341960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.341976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.341987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:40.521 [2024-11-26 20:53:35.342383] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:40.521 [2024-11-26 20:53:35.342404] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 532d7671-2fbd-470b-89fe-c9097b3f6a68 00:25:40.521 [2024-11-26 20:53:35.342422] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:40.521 [2024-11-26 20:53:35.342436] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:40.521 [2024-11-26 20:53:35.342447] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:40.521 [2024-11-26 20:53:35.342462] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:40.521 [2024-11-26 20:53:35.342473] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:40.521 [2024-11-26 20:53:35.342488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:40.521 [2024-11-26 20:53:35.342498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:40.521 [2024-11-26 20:53:35.342513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:40.521 [2024-11-26 20:53:35.342522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:40.521 [2024-11-26 20:53:35.342536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
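[Editor's note] The statistics dump that completes just above reports "total writes: 960", "user writes: 0", and "WAF: inf". Those three values are mutually consistent under the conventional definition of write amplification factor as total device writes divided by user (host) writes: with zero user writes, all 960 writes were internal (metadata persisted during the FTL shutdown traced here), and the ratio diverges, which the log renders as "inf". A minimal, self-contained C sketch of that arithmetic, using only the two counters the dump prints (illustrative only; not SPDK's actual ftl_debug.c implementation):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: derive a WAF value from the two counters shown
     * in the dump above. Returns INFINITY when there are no user writes,
     * which is the case the "WAF: inf" line corresponds to. */
    static double waf_from_counters(uint64_t total_writes, uint64_t user_writes)
    {
        if (user_writes == 0) {
            return INFINITY;
        }
        return (double)total_writes / (double)user_writes;
    }

    int main(void)
    {
        /* Counter values taken from the ftl_dev_dump_stats output above. */
        printf("WAF: %g\n", waf_from_counters(960, 0)); /* prints "WAF: inf" */
        return 0;
    }

[End of editor's note; the traced "Dump statistics" action continues below.]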
00:25:40.521 [2024-11-26 20:53:35.342547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:40.521 [2024-11-26 20:53:35.342564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:25:40.521 [2024-11-26 20:53:35.342579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.521 [2024-11-26 20:53:35.362739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.521 [2024-11-26 20:53:35.362773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:40.521 [2024-11-26 20:53:35.362796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.127 ms 00:25:40.521 [2024-11-26 20:53:35.362807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.521 [2024-11-26 20:53:35.363392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.521 [2024-11-26 20:53:35.363408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:40.521 [2024-11-26 20:53:35.363430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:25:40.521 [2024-11-26 20:53:35.363440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.521 [2024-11-26 20:53:35.436197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.521 [2024-11-26 20:53:35.436241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:40.521 [2024-11-26 20:53:35.436261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.521 [2024-11-26 20:53:35.436272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.521 [2024-11-26 20:53:35.436385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.521 [2024-11-26 20:53:35.436398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:40.521 [2024-11-26 20:53:35.436420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.521 [2024-11-26 20:53:35.436430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.521 [2024-11-26 20:53:35.436495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.521 [2024-11-26 20:53:35.436509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:40.521 [2024-11-26 20:53:35.436530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.521 [2024-11-26 20:53:35.436540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.521 [2024-11-26 20:53:35.436590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.521 [2024-11-26 20:53:35.436602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:40.521 [2024-11-26 20:53:35.436635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.521 [2024-11-26 20:53:35.436652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.780 [2024-11-26 20:53:35.564639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.780 [2024-11-26 20:53:35.564700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:40.780 [2024-11-26 20:53:35.564720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.780 [2024-11-26 20:53:35.564731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.780 [2024-11-26 
20:53:35.665385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.780 [2024-11-26 20:53:35.665439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:40.780 [2024-11-26 20:53:35.665458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.780 [2024-11-26 20:53:35.665468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.780 [2024-11-26 20:53:35.665580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.780 [2024-11-26 20:53:35.665592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:40.780 [2024-11-26 20:53:35.665608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.780 [2024-11-26 20:53:35.665636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.780 [2024-11-26 20:53:35.665686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.780 [2024-11-26 20:53:35.665697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:40.780 [2024-11-26 20:53:35.665710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.780 [2024-11-26 20:53:35.665720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.780 [2024-11-26 20:53:35.665862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.780 [2024-11-26 20:53:35.665876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:40.780 [2024-11-26 20:53:35.665889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.780 [2024-11-26 20:53:35.665899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.780 [2024-11-26 20:53:35.665941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.780 [2024-11-26 20:53:35.665954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:40.780 [2024-11-26 20:53:35.665967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.780 [2024-11-26 20:53:35.665977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.780 [2024-11-26 20:53:35.666032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.780 [2024-11-26 20:53:35.666044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:40.780 [2024-11-26 20:53:35.666065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.780 [2024-11-26 20:53:35.666076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.780 [2024-11-26 20:53:35.666125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.780 [2024-11-26 20:53:35.666138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:40.780 [2024-11-26 20:53:35.666153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.780 [2024-11-26 20:53:35.666165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.780 [2024-11-26 20:53:35.666320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 426.174 ms, result 0 00:25:42.155 20:53:36 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:42.155 20:53:36 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:42.155 [2024-11-26 20:53:36.839826] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:25:42.155 [2024-11-26 20:53:36.840232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79247 ] 00:25:42.155 [2024-11-26 20:53:37.019185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.155 [2024-11-26 20:53:37.137711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.749 [2024-11-26 20:53:37.498764] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:42.749 [2024-11-26 20:53:37.498846] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:42.749 [2024-11-26 20:53:37.661132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.661359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:42.750 [2024-11-26 20:53:37.661385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:42.750 [2024-11-26 20:53:37.661397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.664901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.664941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.750 [2024-11-26 20:53:37.664956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.471 ms 00:25:42.750 [2024-11-26 20:53:37.664966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.665080] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:42.750 [2024-11-26 20:53:37.666133] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:42.750 [2024-11-26 20:53:37.666169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.666182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.750 [2024-11-26 20:53:37.666195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:25:42.750 [2024-11-26 20:53:37.666206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.667818] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:42.750 [2024-11-26 20:53:37.688861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.688904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:42.750 [2024-11-26 20:53:37.688921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.043 ms 00:25:42.750 [2024-11-26 20:53:37.688932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.689054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.689069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:42.750 [2024-11-26 20:53:37.689081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.032 ms 00:25:42.750 [2024-11-26 20:53:37.689091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.696212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.696245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.750 [2024-11-26 20:53:37.696258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.074 ms 00:25:42.750 [2024-11-26 20:53:37.696269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.696380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.696396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.750 [2024-11-26 20:53:37.696408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:42.750 [2024-11-26 20:53:37.696422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.696456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.696467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:42.750 [2024-11-26 20:53:37.696479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:42.750 [2024-11-26 20:53:37.696488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.696515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:42.750 [2024-11-26 20:53:37.701259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.701293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.750 [2024-11-26 20:53:37.701306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.752 ms 00:25:42.750 [2024-11-26 20:53:37.701317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.701391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.701404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:42.750 [2024-11-26 20:53:37.701416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:42.750 [2024-11-26 20:53:37.701430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.701455] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:42.750 [2024-11-26 20:53:37.701477] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:42.750 [2024-11-26 20:53:37.701515] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:42.750 [2024-11-26 20:53:37.701534] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:42.750 [2024-11-26 20:53:37.701641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:42.750 [2024-11-26 20:53:37.701656] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:42.750 [2024-11-26 20:53:37.701674] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:42.750 [2024-11-26 20:53:37.701688] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:42.750 [2024-11-26 20:53:37.701701] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:42.750 [2024-11-26 20:53:37.701712] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:42.750 [2024-11-26 20:53:37.701722] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:42.750 [2024-11-26 20:53:37.701732] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:42.750 [2024-11-26 20:53:37.701742] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:42.750 [2024-11-26 20:53:37.701753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.701764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:42.750 [2024-11-26 20:53:37.701774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:25:42.750 [2024-11-26 20:53:37.701784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.701866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.750 [2024-11-26 20:53:37.701877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:42.750 [2024-11-26 20:53:37.701887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:42.750 [2024-11-26 20:53:37.701898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.750 [2024-11-26 20:53:37.701992] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:42.750 [2024-11-26 20:53:37.702005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:42.750 [2024-11-26 20:53:37.702016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.750 [2024-11-26 20:53:37.702026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.750 [2024-11-26 20:53:37.702037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:42.750 [2024-11-26 20:53:37.702046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:42.750 [2024-11-26 20:53:37.702056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:42.750 [2024-11-26 20:53:37.702065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:42.750 [2024-11-26 20:53:37.702075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:42.750 [2024-11-26 20:53:37.702084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.750 [2024-11-26 20:53:37.702094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:42.750 [2024-11-26 20:53:37.702114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:42.750 [2024-11-26 20:53:37.702124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.750 [2024-11-26 20:53:37.702134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:42.750 [2024-11-26 20:53:37.702143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:42.750 [2024-11-26 20:53:37.702152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.750 [2024-11-26 20:53:37.702162] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:42.750 [2024-11-26 20:53:37.702171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:42.750 [2024-11-26 20:53:37.702181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.750 [2024-11-26 20:53:37.702191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:42.750 [2024-11-26 20:53:37.702200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:42.750 [2024-11-26 20:53:37.702210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.750 [2024-11-26 20:53:37.702219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:42.750 [2024-11-26 20:53:37.702228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:42.750 [2024-11-26 20:53:37.702237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.750 [2024-11-26 20:53:37.702246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:42.750 [2024-11-26 20:53:37.702255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:42.750 [2024-11-26 20:53:37.702264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.750 [2024-11-26 20:53:37.702274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:42.750 [2024-11-26 20:53:37.702283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:42.750 [2024-11-26 20:53:37.702293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.750 [2024-11-26 20:53:37.702303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:42.750 [2024-11-26 20:53:37.702312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:42.750 [2024-11-26 20:53:37.702321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.750 [2024-11-26 20:53:37.702330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:42.750 [2024-11-26 20:53:37.702339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:42.750 [2024-11-26 20:53:37.702349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.750 [2024-11-26 20:53:37.702358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:42.750 [2024-11-26 20:53:37.702368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:42.751 [2024-11-26 20:53:37.702376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.751 [2024-11-26 20:53:37.702385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:42.751 [2024-11-26 20:53:37.702394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:42.751 [2024-11-26 20:53:37.702403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.751 [2024-11-26 20:53:37.702412] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:42.751 [2024-11-26 20:53:37.702426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:42.751 [2024-11-26 20:53:37.702437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.751 [2024-11-26 20:53:37.702446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.751 [2024-11-26 20:53:37.702456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:42.751 
[2024-11-26 20:53:37.702466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:42.751 [2024-11-26 20:53:37.702475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:42.751 [2024-11-26 20:53:37.702485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:42.751 [2024-11-26 20:53:37.702500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:42.751 [2024-11-26 20:53:37.702510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:42.751 [2024-11-26 20:53:37.702521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:42.751 [2024-11-26 20:53:37.702533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.751 [2024-11-26 20:53:37.702545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:42.751 [2024-11-26 20:53:37.702555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:42.751 [2024-11-26 20:53:37.702566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:42.751 [2024-11-26 20:53:37.702576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:42.751 [2024-11-26 20:53:37.702587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:42.751 [2024-11-26 20:53:37.702597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:42.751 [2024-11-26 20:53:37.702608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:42.751 [2024-11-26 20:53:37.702632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:42.751 [2024-11-26 20:53:37.702643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:42.751 [2024-11-26 20:53:37.702654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:42.751 [2024-11-26 20:53:37.702665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:42.751 [2024-11-26 20:53:37.702675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:42.751 [2024-11-26 20:53:37.702686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:42.751 [2024-11-26 20:53:37.702696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:42.751 [2024-11-26 20:53:37.702707] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:42.751 [2024-11-26 20:53:37.702718] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.751 [2024-11-26 20:53:37.702733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:42.751 [2024-11-26 20:53:37.702744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:42.751 [2024-11-26 20:53:37.702755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:42.751 [2024-11-26 20:53:37.702765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:42.751 [2024-11-26 20:53:37.702776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.751 [2024-11-26 20:53:37.702787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:42.751 [2024-11-26 20:53:37.702797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:25:42.751 [2024-11-26 20:53:37.702812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 20:53:37.743895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.743948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.011 [2024-11-26 20:53:37.743964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.023 ms 00:25:43.011 [2024-11-26 20:53:37.743980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 20:53:37.744146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.744160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:43.011 [2024-11-26 20:53:37.744172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:43.011 [2024-11-26 20:53:37.744182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 20:53:37.804817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.805006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.011 [2024-11-26 20:53:37.805032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.608 ms 00:25:43.011 [2024-11-26 20:53:37.805044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 20:53:37.805192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.805205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.011 [2024-11-26 20:53:37.805217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:43.011 [2024-11-26 20:53:37.805227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 20:53:37.805694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.805714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.011 [2024-11-26 20:53:37.805733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:25:43.011 [2024-11-26 20:53:37.805743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 
20:53:37.805864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.805878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.011 [2024-11-26 20:53:37.805888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:25:43.011 [2024-11-26 20:53:37.805898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 20:53:37.827569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.827624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.011 [2024-11-26 20:53:37.827651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.645 ms 00:25:43.011 [2024-11-26 20:53:37.827662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 20:53:37.848358] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:43.011 [2024-11-26 20:53:37.848419] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:43.011 [2024-11-26 20:53:37.848437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.848448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:43.011 [2024-11-26 20:53:37.848461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.611 ms 00:25:43.011 [2024-11-26 20:53:37.848471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 20:53:37.879566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.879624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:43.011 [2024-11-26 20:53:37.879646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.996 ms 00:25:43.011 [2024-11-26 20:53:37.879656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 20:53:37.898367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.898407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:43.011 [2024-11-26 20:53:37.898422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.590 ms 00:25:43.011 [2024-11-26 20:53:37.898432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 20:53:37.917056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.917215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:43.011 [2024-11-26 20:53:37.917236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.540 ms 00:25:43.011 [2024-11-26 20:53:37.917248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.011 [2024-11-26 20:53:37.918071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.011 [2024-11-26 20:53:37.918098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:43.011 [2024-11-26 20:53:37.918111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:25:43.011 [2024-11-26 20:53:37.918122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.270 [2024-11-26 20:53:38.007851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:43.270 [2024-11-26 20:53:38.007922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:43.270 [2024-11-26 20:53:38.007941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.699 ms 00:25:43.270 [2024-11-26 20:53:38.007953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.270 [2024-11-26 20:53:38.019626] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:43.270 [2024-11-26 20:53:38.036622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.270 [2024-11-26 20:53:38.036684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:43.270 [2024-11-26 20:53:38.036707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.519 ms 00:25:43.270 [2024-11-26 20:53:38.036718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.270 [2024-11-26 20:53:38.036873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.270 [2024-11-26 20:53:38.036888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:43.270 [2024-11-26 20:53:38.036900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:43.270 [2024-11-26 20:53:38.036911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.270 [2024-11-26 20:53:38.036971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.270 [2024-11-26 20:53:38.036982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:43.270 [2024-11-26 20:53:38.036998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:43.270 [2024-11-26 20:53:38.037012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.270 [2024-11-26 20:53:38.037044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.270 [2024-11-26 20:53:38.037057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:43.270 [2024-11-26 20:53:38.037068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:43.270 [2024-11-26 20:53:38.037078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.270 [2024-11-26 20:53:38.037118] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:43.270 [2024-11-26 20:53:38.037130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.270 [2024-11-26 20:53:38.037140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:43.270 [2024-11-26 20:53:38.037151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:43.270 [2024-11-26 20:53:38.037162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.270 [2024-11-26 20:53:38.075454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.270 [2024-11-26 20:53:38.075521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:43.270 [2024-11-26 20:53:38.075540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.263 ms 00:25:43.270 [2024-11-26 20:53:38.075552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.270 [2024-11-26 20:53:38.075737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.270 [2024-11-26 20:53:38.075754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:25:43.270 [2024-11-26 20:53:38.075766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:43.270 [2024-11-26 20:53:38.075782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.270 [2024-11-26 20:53:38.076845] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:43.270 [2024-11-26 20:53:38.082119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 415.354 ms, result 0 00:25:43.270 [2024-11-26 20:53:38.083000] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:43.270 [2024-11-26 20:53:38.102768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:44.206  [2024-11-26T20:53:40.135Z] Copying: 31/256 [MB] (31 MBps) [2024-11-26T20:53:41.510Z] Copying: 60/256 [MB] (28 MBps) [2024-11-26T20:53:42.444Z] Copying: 88/256 [MB] (28 MBps) [2024-11-26T20:53:43.379Z] Copying: 116/256 [MB] (27 MBps) [2024-11-26T20:53:44.316Z] Copying: 145/256 [MB] (29 MBps) [2024-11-26T20:53:45.249Z] Copying: 174/256 [MB] (29 MBps) [2024-11-26T20:53:46.183Z] Copying: 201/256 [MB] (27 MBps) [2024-11-26T20:53:47.117Z] Copying: 230/256 [MB] (28 MBps) [2024-11-26T20:53:47.117Z] Copying: 256/256 [MB] (average 28 MBps)[2024-11-26 20:53:47.003250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:52.123 [2024-11-26 20:53:47.018573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.123 [2024-11-26 20:53:47.018753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:52.123 [2024-11-26 20:53:47.018787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:52.123 [2024-11-26 20:53:47.018799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.123 [2024-11-26 20:53:47.018831] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:52.123 [2024-11-26 20:53:47.023154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.123 [2024-11-26 20:53:47.023185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:52.123 [2024-11-26 20:53:47.023199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:25:52.123 [2024-11-26 20:53:47.023209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.123 [2024-11-26 20:53:47.023441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.123 [2024-11-26 20:53:47.023455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:52.123 [2024-11-26 20:53:47.023467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:25:52.123 [2024-11-26 20:53:47.023477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.123 [2024-11-26 20:53:47.026554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.123 [2024-11-26 20:53:47.026690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:52.123 [2024-11-26 20:53:47.026711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.056 ms 00:25:52.123 [2024-11-26 20:53:47.026722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.123 [2024-11-26 20:53:47.032808] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.123 [2024-11-26 20:53:47.032844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:52.123 [2024-11-26 20:53:47.032857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.058 ms 00:25:52.123 [2024-11-26 20:53:47.032867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.123 [2024-11-26 20:53:47.070178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.123 [2024-11-26 20:53:47.070340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:52.123 [2024-11-26 20:53:47.070362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.237 ms 00:25:52.123 [2024-11-26 20:53:47.070373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.123 [2024-11-26 20:53:47.091995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.123 [2024-11-26 20:53:47.092041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:52.123 [2024-11-26 20:53:47.092056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.521 ms 00:25:52.123 [2024-11-26 20:53:47.092067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.123 [2024-11-26 20:53:47.092224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.123 [2024-11-26 20:53:47.092242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:52.123 [2024-11-26 20:53:47.092266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:52.123 [2024-11-26 20:53:47.092276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.383 [2024-11-26 20:53:47.130987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.383 [2024-11-26 20:53:47.131027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:52.383 [2024-11-26 20:53:47.131042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.691 ms 00:25:52.383 [2024-11-26 20:53:47.131053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.383 [2024-11-26 20:53:47.167823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.383 [2024-11-26 20:53:47.167861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:52.383 [2024-11-26 20:53:47.167875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.697 ms 00:25:52.383 [2024-11-26 20:53:47.167885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.383 [2024-11-26 20:53:47.204652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.383 [2024-11-26 20:53:47.204799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:52.383 [2024-11-26 20:53:47.204820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.684 ms 00:25:52.383 [2024-11-26 20:53:47.204831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.383 [2024-11-26 20:53:47.241142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.383 [2024-11-26 20:53:47.241178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:52.383 [2024-11-26 20:53:47.241192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.168 ms 00:25:52.383 [2024-11-26 20:53:47.241217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:52.383 [2024-11-26 20:53:47.241275] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:52.383 [2024-11-26 20:53:47.241292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:25:52.383 [2024-11-26 20:53:47.241547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:52.383 [2024-11-26 20:53:47.241601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.241993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242377] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:52.384 [2024-11-26 20:53:47.242405] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:52.384 [2024-11-26 20:53:47.242416] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 532d7671-2fbd-470b-89fe-c9097b3f6a68 00:25:52.384 [2024-11-26 20:53:47.242427] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:52.384 [2024-11-26 20:53:47.242437] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:52.384 [2024-11-26 20:53:47.242452] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:52.384 [2024-11-26 20:53:47.242462] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:52.384 [2024-11-26 20:53:47.242471] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:52.384 [2024-11-26 20:53:47.242486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:52.384 [2024-11-26 20:53:47.242496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:52.384 [2024-11-26 20:53:47.242505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:52.384 [2024-11-26 20:53:47.242515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:52.384 [2024-11-26 20:53:47.242525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.384 [2024-11-26 20:53:47.242536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:52.384 [2024-11-26 20:53:47.242547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:25:52.384 [2024-11-26 20:53:47.242557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.384 [2024-11-26 20:53:47.262638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.384 [2024-11-26 20:53:47.262685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:52.384 [2024-11-26 20:53:47.262699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.060 ms 00:25:52.384 [2024-11-26 20:53:47.262715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.384 [2024-11-26 20:53:47.263261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.384 [2024-11-26 20:53:47.263279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:52.384 [2024-11-26 20:53:47.263291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:25:52.384 [2024-11-26 20:53:47.263301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.384 [2024-11-26 20:53:47.318957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.385 [2024-11-26 20:53:47.318993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:52.385 [2024-11-26 20:53:47.319033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.385 [2024-11-26 20:53:47.319044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.385 [2024-11-26 20:53:47.319153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.385 [2024-11-26 20:53:47.319166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:52.385 
[2024-11-26 20:53:47.319177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.385 [2024-11-26 20:53:47.319187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.385 [2024-11-26 20:53:47.319237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.385 [2024-11-26 20:53:47.319250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:52.385 [2024-11-26 20:53:47.319261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.385 [2024-11-26 20:53:47.319278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.385 [2024-11-26 20:53:47.319297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.385 [2024-11-26 20:53:47.319308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:52.385 [2024-11-26 20:53:47.319318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.385 [2024-11-26 20:53:47.319328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.644 [2024-11-26 20:53:47.443077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.644 [2024-11-26 20:53:47.443334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:52.644 [2024-11-26 20:53:47.443358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.644 [2024-11-26 20:53:47.443381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.644 [2024-11-26 20:53:47.546674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.644 [2024-11-26 20:53:47.546723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:52.644 [2024-11-26 20:53:47.546740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.644 [2024-11-26 20:53:47.546750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.644 [2024-11-26 20:53:47.546849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.644 [2024-11-26 20:53:47.546861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.644 [2024-11-26 20:53:47.546873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.644 [2024-11-26 20:53:47.546884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.644 [2024-11-26 20:53:47.546918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.644 [2024-11-26 20:53:47.546930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:52.644 [2024-11-26 20:53:47.546940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.644 [2024-11-26 20:53:47.546950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.644 [2024-11-26 20:53:47.547066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.644 [2024-11-26 20:53:47.547080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:52.644 [2024-11-26 20:53:47.547092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.644 [2024-11-26 20:53:47.547102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.644 [2024-11-26 20:53:47.547141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.644 [2024-11-26 20:53:47.547158] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:52.644 [2024-11-26 20:53:47.547169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.644 [2024-11-26 20:53:47.547179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.644 [2024-11-26 20:53:47.547221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.644 [2024-11-26 20:53:47.547233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:52.644 [2024-11-26 20:53:47.547243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.644 [2024-11-26 20:53:47.547253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.644 [2024-11-26 20:53:47.547300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.644 [2024-11-26 20:53:47.547312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:52.644 [2024-11-26 20:53:47.547323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.644 [2024-11-26 20:53:47.547333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.644 [2024-11-26 20:53:47.547495] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 528.900 ms, result 0 00:25:54.021 00:25:54.021 00:25:54.021 20:53:48 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:25:54.021 20:53:48 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:54.286 20:53:49 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:54.550 [2024-11-26 20:53:49.286975] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
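The three trim.sh steps above check the result of the preceding trim: cmp --bytes=4194304 against /dev/zero succeeds only if the first 4 MiB of the read-back data file are all zero bytes (which is what a trimmed range should return), md5sum records a checksum of that file, and spdk_dd then writes 1024 blocks of the random pattern back through the ftl0 bdev described by ftl.json. Below is a minimal C sketch of the zero-check that the cmp invocation performs; it is a stand-alone illustration, with the file path taken from argv rather than the test tree.

/* Stand-alone illustration of the zero-check above: exits 0 only if the
 * first 4 MiB of the given file are all zero bytes, like
 * `cmp --bytes=4194304 <file> /dev/zero`. */
#include <stdio.h>

int main(int argc, char **argv)
{
	const unsigned long limit = 4194304UL;  /* --bytes=4194304 (4 MiB) */
	FILE *f;
	unsigned char buf[4096];
	unsigned long checked = 0;
	size_t i, n;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 2;
	}
	f = fopen(argv[1], "rb");
	if (f == NULL) {
		perror("fopen");
		return 2;
	}
	while (checked < limit && (n = fread(buf, 1, sizeof buf, f)) > 0) {
		for (i = 0; i < n && checked + i < limit; i++) {
			if (buf[i] != 0) {
				fprintf(stderr, "non-zero byte at offset %lu\n",
					(unsigned long)(checked + i));
				fclose(f);
				return 1;  /* cmp would flag this difference */
			}
		}
		checked += n;
	}
	fclose(f);
	return 0;  /* the trimmed range read back as zeroes */
}

Compiled with, say, gcc zerocheck.c -o zerocheck, running it against the data file mirrors cmp's exit status: 0 when the range reads back empty, non-zero at the first stray byte.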
00:25:54.550 [2024-11-26 20:53:49.287427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79374 ] 00:25:54.550 [2024-11-26 20:53:49.482731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.809 [2024-11-26 20:53:49.599425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.068 [2024-11-26 20:53:49.972477] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:55.068 [2024-11-26 20:53:49.972540] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:55.328 [2024-11-26 20:53:50.135384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.328 [2024-11-26 20:53:50.135433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:55.328 [2024-11-26 20:53:50.135448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:55.328 [2024-11-26 20:53:50.135458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.328 [2024-11-26 20:53:50.138730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.328 [2024-11-26 20:53:50.138768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:55.328 [2024-11-26 20:53:50.138781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:25:55.328 [2024-11-26 20:53:50.138792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.328 [2024-11-26 20:53:50.138905] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:55.328 [2024-11-26 20:53:50.139921] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:55.328 [2024-11-26 20:53:50.139951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.328 [2024-11-26 20:53:50.139963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:55.328 [2024-11-26 20:53:50.139974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:25:55.328 [2024-11-26 20:53:50.139984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.328 [2024-11-26 20:53:50.141490] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:55.328 [2024-11-26 20:53:50.160759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.328 [2024-11-26 20:53:50.160936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:55.328 [2024-11-26 20:53:50.160959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.271 ms 00:25:55.328 [2024-11-26 20:53:50.160970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.328 [2024-11-26 20:53:50.161092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.328 [2024-11-26 20:53:50.161111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:55.328 [2024-11-26 20:53:50.161123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:55.328 [2024-11-26 20:53:50.161134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.328 [2024-11-26 20:53:50.168067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:55.328 [2024-11-26 20:53:50.168211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:55.328 [2024-11-26 20:53:50.168231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.885 ms 00:25:55.328 [2024-11-26 20:53:50.168242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.328 [2024-11-26 20:53:50.168355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.328 [2024-11-26 20:53:50.168370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:55.328 [2024-11-26 20:53:50.168381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:55.328 [2024-11-26 20:53:50.168391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.328 [2024-11-26 20:53:50.168425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.328 [2024-11-26 20:53:50.168437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:55.328 [2024-11-26 20:53:50.168449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:55.328 [2024-11-26 20:53:50.168459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.328 [2024-11-26 20:53:50.168485] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:55.328 [2024-11-26 20:53:50.173299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.328 [2024-11-26 20:53:50.173330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:55.328 [2024-11-26 20:53:50.173343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.821 ms 00:25:55.328 [2024-11-26 20:53:50.173353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.328 [2024-11-26 20:53:50.173422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.328 [2024-11-26 20:53:50.173435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:55.328 [2024-11-26 20:53:50.173446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:55.328 [2024-11-26 20:53:50.173456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.328 [2024-11-26 20:53:50.173484] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:55.328 [2024-11-26 20:53:50.173507] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:55.328 [2024-11-26 20:53:50.173544] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:55.328 [2024-11-26 20:53:50.173564] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:55.328 [2024-11-26 20:53:50.173693] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:55.328 [2024-11-26 20:53:50.173712] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:55.329 [2024-11-26 20:53:50.173725] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:55.329 [2024-11-26 20:53:50.173744] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:55.329 [2024-11-26 20:53:50.173756] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:55.329 [2024-11-26 20:53:50.173769] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:55.329 [2024-11-26 20:53:50.173779] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:55.329 [2024-11-26 20:53:50.173789] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:55.329 [2024-11-26 20:53:50.173800] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:55.329 [2024-11-26 20:53:50.173811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.329 [2024-11-26 20:53:50.173822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:55.329 [2024-11-26 20:53:50.173833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:25:55.329 [2024-11-26 20:53:50.173843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.329 [2024-11-26 20:53:50.173923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.329 [2024-11-26 20:53:50.173938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:55.329 [2024-11-26 20:53:50.173949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:55.329 [2024-11-26 20:53:50.173959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.329 [2024-11-26 20:53:50.174052] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:55.329 [2024-11-26 20:53:50.174065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:55.329 [2024-11-26 20:53:50.174076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:55.329 [2024-11-26 20:53:50.174086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:55.329 [2024-11-26 20:53:50.174106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:55.329 [2024-11-26 20:53:50.174125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:55.329 [2024-11-26 20:53:50.174135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:55.329 [2024-11-26 20:53:50.174154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:55.329 [2024-11-26 20:53:50.174173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:55.329 [2024-11-26 20:53:50.174182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:55.329 [2024-11-26 20:53:50.174191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:55.329 [2024-11-26 20:53:50.174201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:55.329 [2024-11-26 20:53:50.174217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:55.329 [2024-11-26 20:53:50.174236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:55.329 [2024-11-26 20:53:50.174246] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:55.329 [2024-11-26 20:53:50.174264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:55.329 [2024-11-26 20:53:50.174283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:55.329 [2024-11-26 20:53:50.174292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:55.329 [2024-11-26 20:53:50.174310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:55.329 [2024-11-26 20:53:50.174320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:55.329 [2024-11-26 20:53:50.174338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:55.329 [2024-11-26 20:53:50.174347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:55.329 [2024-11-26 20:53:50.174365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:55.329 [2024-11-26 20:53:50.174374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:55.329 [2024-11-26 20:53:50.174392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:55.329 [2024-11-26 20:53:50.174400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:55.329 [2024-11-26 20:53:50.174409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:55.329 [2024-11-26 20:53:50.174418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:55.329 [2024-11-26 20:53:50.174428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:55.329 [2024-11-26 20:53:50.174436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:55.329 [2024-11-26 20:53:50.174455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:55.329 [2024-11-26 20:53:50.174464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174474] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:55.329 [2024-11-26 20:53:50.174484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:55.329 [2024-11-26 20:53:50.174497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:55.329 [2024-11-26 20:53:50.174507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:55.329 [2024-11-26 20:53:50.174517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:55.329 [2024-11-26 20:53:50.174526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:55.329 [2024-11-26 20:53:50.174536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:55.329 
[2024-11-26 20:53:50.174545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:55.329 [2024-11-26 20:53:50.174554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:55.329 [2024-11-26 20:53:50.174563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:55.329 [2024-11-26 20:53:50.174574] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:55.329 [2024-11-26 20:53:50.174586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:55.329 [2024-11-26 20:53:50.174598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:55.329 [2024-11-26 20:53:50.174608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:55.329 [2024-11-26 20:53:50.174629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:55.329 [2024-11-26 20:53:50.174640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:55.329 [2024-11-26 20:53:50.174650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:55.329 [2024-11-26 20:53:50.174664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:55.329 [2024-11-26 20:53:50.174675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:55.329 [2024-11-26 20:53:50.174685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:55.329 [2024-11-26 20:53:50.174695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:55.329 [2024-11-26 20:53:50.174706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:55.329 [2024-11-26 20:53:50.174716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:55.329 [2024-11-26 20:53:50.174726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:55.329 [2024-11-26 20:53:50.174737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:55.329 [2024-11-26 20:53:50.174747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:55.329 [2024-11-26 20:53:50.174757] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:55.329 [2024-11-26 20:53:50.174769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:55.329 [2024-11-26 20:53:50.174781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:55.329 [2024-11-26 20:53:50.174791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:55.329 [2024-11-26 20:53:50.174802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:55.329 [2024-11-26 20:53:50.174812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:55.329 [2024-11-26 20:53:50.174822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.329 [2024-11-26 20:53:50.174837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:55.329 [2024-11-26 20:53:50.174847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:25:55.329 [2024-11-26 20:53:50.174857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.329 [2024-11-26 20:53:50.214479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.329 [2024-11-26 20:53:50.214735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:55.329 [2024-11-26 20:53:50.214761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.562 ms 00:25:55.329 [2024-11-26 20:53:50.214773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.329 [2024-11-26 20:53:50.214938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.330 [2024-11-26 20:53:50.214951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:55.330 [2024-11-26 20:53:50.214962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:55.330 [2024-11-26 20:53:50.214972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.330 [2024-11-26 20:53:50.273319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.330 [2024-11-26 20:53:50.273368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:55.330 [2024-11-26 20:53:50.273387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.321 ms 00:25:55.330 [2024-11-26 20:53:50.273407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.330 [2024-11-26 20:53:50.273540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.330 [2024-11-26 20:53:50.273553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:55.330 [2024-11-26 20:53:50.273565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:55.330 [2024-11-26 20:53:50.273576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.330 [2024-11-26 20:53:50.274048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.330 [2024-11-26 20:53:50.274063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:55.330 [2024-11-26 20:53:50.274080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:25:55.330 [2024-11-26 20:53:50.274091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.330 [2024-11-26 20:53:50.274214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.330 [2024-11-26 20:53:50.274227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:55.330 [2024-11-26 20:53:50.274238] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:25:55.330 [2024-11-26 20:53:50.274248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.330 [2024-11-26 20:53:50.294020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.330 [2024-11-26 20:53:50.294065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:55.330 [2024-11-26 20:53:50.294080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.747 ms 00:25:55.330 [2024-11-26 20:53:50.294107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.330 [2024-11-26 20:53:50.313776] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:55.330 [2024-11-26 20:53:50.313816] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:55.330 [2024-11-26 20:53:50.313832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.330 [2024-11-26 20:53:50.313858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:55.330 [2024-11-26 20:53:50.313870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.585 ms 00:25:55.330 [2024-11-26 20:53:50.313880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.344590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.344662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:55.589 [2024-11-26 20:53:50.344678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.616 ms 00:25:55.589 [2024-11-26 20:53:50.344689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.362937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.362974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:55.589 [2024-11-26 20:53:50.362987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.155 ms 00:25:55.589 [2024-11-26 20:53:50.363012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.381227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.381262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:55.589 [2024-11-26 20:53:50.381274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.136 ms 00:25:55.589 [2024-11-26 20:53:50.381299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.382155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.382180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:55.589 [2024-11-26 20:53:50.382192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:25:55.589 [2024-11-26 20:53:50.382202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.469877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.469940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:55.589 [2024-11-26 20:53:50.469973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.643 ms 00:25:55.589 [2024-11-26 20:53:50.469985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.481189] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:55.589 [2024-11-26 20:53:50.497877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.497939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:55.589 [2024-11-26 20:53:50.497956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.763 ms 00:25:55.589 [2024-11-26 20:53:50.497974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.498126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.498141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:55.589 [2024-11-26 20:53:50.498153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:55.589 [2024-11-26 20:53:50.498164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.498222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.498234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:55.589 [2024-11-26 20:53:50.498245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:55.589 [2024-11-26 20:53:50.498260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.498295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.498310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:55.589 [2024-11-26 20:53:50.498324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:55.589 [2024-11-26 20:53:50.498334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.498372] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:55.589 [2024-11-26 20:53:50.498385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.498395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:55.589 [2024-11-26 20:53:50.498404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:55.589 [2024-11-26 20:53:50.498414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.535343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.535506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:55.589 [2024-11-26 20:53:50.535528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.906 ms 00:25:55.589 [2024-11-26 20:53:50.535540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.589 [2024-11-26 20:53:50.535728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.589 [2024-11-26 20:53:50.535757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:55.589 [2024-11-26 20:53:50.535770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:55.589 [2024-11-26 20:53:50.535780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
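Each management step in the startup trace above is reported as the same four-line group from mngt/ftl_mngt.c: an Action (or Rollback) marker, then the step name, its duration, and its status. The following is an illustrative sketch, not the actual SPDK source, of how a step tracer can time a step and emit output of that shape; the struct and helper names here are assumptions for the example.

/* Illustrative sketch only (not the SPDK source): a tracer emitting the
 * Action/name/duration/status groups seen throughout this log. */
#include <stdio.h>
#include <time.h>

struct ftl_step {
	const char *name;       /* e.g. "Set FTL dirty state" */
	struct timespec start;  /* captured when the step begins */
};

static double elapsed_ms(const struct timespec *start)
{
	struct timespec now;
	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - start->tv_sec) * 1e3 +
	       (now.tv_nsec - start->tv_nsec) / 1e6;
}

static void trace_step(const char *dev, const struct ftl_step *step,
		       int rollback, int status)
{
	printf("[FTL][%s] %s\n", dev, rollback ? "Rollback" : "Action");
	printf("[FTL][%s] name: %s\n", dev, step->name);
	printf("[FTL][%s] duration: %.3f ms\n", dev, elapsed_ms(&step->start));
	printf("[FTL][%s] status: %d\n", dev, status);
}

int main(void)
{
	struct ftl_step step = { .name = "Set FTL dirty state" };
	clock_gettime(CLOCK_MONOTONIC, &step.start);
	/* ... the step's actual work would run here ... */
	trace_step("ftl0", &step, 0 /* rollback */, 0 /* status */);
	return 0;
}

The durations printed in the log (for example, 36.906 ms for "Set FTL dirty state") are per-step elapsed times of this kind, with status 0 indicating the step completed successfully.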
00:25:55.589 [2024-11-26 20:53:50.536771] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:55.589 [2024-11-26 20:53:50.541094] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.045 ms, result 0 00:25:55.589 [2024-11-26 20:53:50.541958] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:55.589 [2024-11-26 20:53:50.560517] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:55.847  [2024-11-26T20:53:50.841Z] Copying: 4096/4096 [kB] (average 25 MBps)[2024-11-26 20:53:50.722606] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:55.847 [2024-11-26 20:53:50.737377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.847 [2024-11-26 20:53:50.737524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:55.847 [2024-11-26 20:53:50.737553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:55.847 [2024-11-26 20:53:50.737563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.847 [2024-11-26 20:53:50.737594] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:55.847 [2024-11-26 20:53:50.742152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.847 [2024-11-26 20:53:50.742180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:55.848 [2024-11-26 20:53:50.742192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.542 ms 00:25:55.848 [2024-11-26 20:53:50.742203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.848 [2024-11-26 20:53:50.744146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.848 [2024-11-26 20:53:50.744315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:55.848 [2024-11-26 20:53:50.744336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.918 ms 00:25:55.848 [2024-11-26 20:53:50.744348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.848 [2024-11-26 20:53:50.747719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.848 [2024-11-26 20:53:50.747752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:55.848 [2024-11-26 20:53:50.747764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.339 ms 00:25:55.848 [2024-11-26 20:53:50.747775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.848 [2024-11-26 20:53:50.753581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.848 [2024-11-26 20:53:50.753623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:55.848 [2024-11-26 20:53:50.753636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.774 ms 00:25:55.848 [2024-11-26 20:53:50.753647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.848 [2024-11-26 20:53:50.790172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.848 [2024-11-26 20:53:50.790209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:55.848 [2024-11-26 20:53:50.790222] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 36.476 ms 00:25:55.848 [2024-11-26 20:53:50.790233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.848 [2024-11-26 20:53:50.811192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.848 [2024-11-26 20:53:50.811234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:55.848 [2024-11-26 20:53:50.811247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.903 ms 00:25:55.848 [2024-11-26 20:53:50.811258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.848 [2024-11-26 20:53:50.811402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.848 [2024-11-26 20:53:50.811416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:55.848 [2024-11-26 20:53:50.811439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:55.848 [2024-11-26 20:53:50.811449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.108 [2024-11-26 20:53:50.848591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.108 [2024-11-26 20:53:50.848657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:56.108 [2024-11-26 20:53:50.848672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.124 ms 00:25:56.108 [2024-11-26 20:53:50.848682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.108 [2024-11-26 20:53:50.885023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.108 [2024-11-26 20:53:50.885060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:56.108 [2024-11-26 20:53:50.885073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.285 ms 00:25:56.108 [2024-11-26 20:53:50.885084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.108 [2024-11-26 20:53:50.921529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.108 [2024-11-26 20:53:50.921563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:56.108 [2024-11-26 20:53:50.921577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.390 ms 00:25:56.108 [2024-11-26 20:53:50.921587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.108 [2024-11-26 20:53:50.958887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.108 [2024-11-26 20:53:50.959047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:56.108 [2024-11-26 20:53:50.959069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.179 ms 00:25:56.108 [2024-11-26 20:53:50.959079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.108 [2024-11-26 20:53:50.959182] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:56.108 [2024-11-26 20:53:50.959200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:25:56.108 [2024-11-26 20:53:50.959246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:56.108 [2024-11-26 20:53:50.959354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.959998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960073] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:56.109 [2024-11-26 20:53:50.960165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:56.110 [2024-11-26 20:53:50.960322] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:56.110 [2024-11-26 20:53:50.960333] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 532d7671-2fbd-470b-89fe-c9097b3f6a68 00:25:56.110 [2024-11-26 20:53:50.960344] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:56.110 [2024-11-26 20:53:50.960354] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:25:56.110 [2024-11-26 20:53:50.960364] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:56.110 [2024-11-26 20:53:50.960375] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:56.110 [2024-11-26 20:53:50.960384] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:56.110 [2024-11-26 20:53:50.960395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:56.110 [2024-11-26 20:53:50.960408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:56.110 [2024-11-26 20:53:50.960418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:56.110 [2024-11-26 20:53:50.960427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:56.110 [2024-11-26 20:53:50.960437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.110 [2024-11-26 20:53:50.960448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:56.110 [2024-11-26 20:53:50.960458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:25:56.110 [2024-11-26 20:53:50.960468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.110 [2024-11-26 20:53:50.980643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.110 [2024-11-26 20:53:50.980674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:56.110 [2024-11-26 20:53:50.980687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.137 ms 00:25:56.110 [2024-11-26 20:53:50.980698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.110 [2024-11-26 20:53:50.981312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.110 [2024-11-26 20:53:50.981331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:56.110 [2024-11-26 20:53:50.981343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:25:56.110 [2024-11-26 20:53:50.981353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.110 [2024-11-26 20:53:51.036471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.110 [2024-11-26 20:53:51.036628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.110 [2024-11-26 20:53:51.036650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.110 [2024-11-26 20:53:51.036667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.110 [2024-11-26 20:53:51.036760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.110 [2024-11-26 20:53:51.036771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.110 [2024-11-26 20:53:51.036782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.110 [2024-11-26 20:53:51.036792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.110 [2024-11-26 20:53:51.036847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.110 [2024-11-26 20:53:51.036860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.110 [2024-11-26 20:53:51.036870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.110 [2024-11-26 20:53:51.036881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.110 [2024-11-26 20:53:51.036906] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.110 [2024-11-26 20:53:51.036917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.110 [2024-11-26 20:53:51.036938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.110 [2024-11-26 20:53:51.036948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.369 [2024-11-26 20:53:51.166717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.369 [2024-11-26 20:53:51.166770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.369 [2024-11-26 20:53:51.166785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.369 [2024-11-26 20:53:51.166817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.369 [2024-11-26 20:53:51.269837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.369 [2024-11-26 20:53:51.269888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.369 [2024-11-26 20:53:51.269902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.369 [2024-11-26 20:53:51.269914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.369 [2024-11-26 20:53:51.270012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.369 [2024-11-26 20:53:51.270024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.369 [2024-11-26 20:53:51.270036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.369 [2024-11-26 20:53:51.270046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.369 [2024-11-26 20:53:51.270076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.369 [2024-11-26 20:53:51.270093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.369 [2024-11-26 20:53:51.270103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.369 [2024-11-26 20:53:51.270113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.369 [2024-11-26 20:53:51.270223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.369 [2024-11-26 20:53:51.270237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.370 [2024-11-26 20:53:51.270248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.370 [2024-11-26 20:53:51.270258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.370 [2024-11-26 20:53:51.270296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.370 [2024-11-26 20:53:51.270308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:56.370 [2024-11-26 20:53:51.270323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.370 [2024-11-26 20:53:51.270334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.370 [2024-11-26 20:53:51.270373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.370 [2024-11-26 20:53:51.270385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.370 [2024-11-26 20:53:51.270395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.370 [2024-11-26 20:53:51.270406] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:56.370 [2024-11-26 20:53:51.270449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.370 [2024-11-26 20:53:51.270465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.370 [2024-11-26 20:53:51.270475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.370 [2024-11-26 20:53:51.270485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.370 [2024-11-26 20:53:51.270647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.229 ms, result 0 00:25:57.746 00:25:57.746 00:25:57.746 20:53:52 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79409 00:25:57.746 20:53:52 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:57.746 20:53:52 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79409 00:25:57.746 20:53:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79409 ']' 00:25:57.746 20:53:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.746 20:53:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.746 20:53:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.746 20:53:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.746 20:53:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:57.746 [2024-11-26 20:53:52.472793] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
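
The trace above (trim.sh@92-94) restarts spdk_tgt with FTL init logging enabled, records its pid, and blocks until the target answers on its RPC socket before issuing any bdev_ftl RPCs. A minimal sketch of that launch-and-wait pattern, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket; wait_for_rpc here is an illustrative helper, not the waitforlisten implementation from autotest_common.sh:

    # Start the target with FTL init tracing and capture its pid.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!

    # Poll until the RPC socket accepts calls, or give up if the target dies.
    wait_for_rpc() {
        local pid=$1
        while kill -0 "$pid" 2>/dev/null; do
            # rpc_get_methods succeeds once /var/tmp/spdk.sock is listening.
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods \
                >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc "$svcpid"

Polling an actual RPC rather than sleeping a fixed interval is what lets the test proceed the moment the reactor is up, which matters here since FTL startup itself takes several hundred milliseconds on top of target initialization.
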
00:25:57.746 [2024-11-26 20:53:52.472965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79409 ] 00:25:57.746 [2024-11-26 20:53:52.652043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.005 [2024-11-26 20:53:52.766154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.943 20:53:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.943 20:53:53 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:58.943 20:53:53 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:58.943 [2024-11-26 20:53:53.900039] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:58.943 [2024-11-26 20:53:53.900104] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:59.202 [2024-11-26 20:53:54.084643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.084842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:59.202 [2024-11-26 20:53:54.084873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:59.202 [2024-11-26 20:53:54.084884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.088741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.088790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:59.202 [2024-11-26 20:53:54.088805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.824 ms 00:25:59.202 [2024-11-26 20:53:54.088832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.088942] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:59.202 [2024-11-26 20:53:54.089992] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:59.202 [2024-11-26 20:53:54.090019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.090030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:59.202 [2024-11-26 20:53:54.090043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:25:59.202 [2024-11-26 20:53:54.090055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.091599] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:59.202 [2024-11-26 20:53:54.111122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.111167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:59.202 [2024-11-26 20:53:54.111198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.526 ms 00:25:59.202 [2024-11-26 20:53:54.111214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.111317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.111336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:59.202 [2024-11-26 20:53:54.111348] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:59.202 [2024-11-26 20:53:54.111362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.118245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.118290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:59.202 [2024-11-26 20:53:54.118303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.827 ms 00:25:59.202 [2024-11-26 20:53:54.118318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.118456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.118476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:59.202 [2024-11-26 20:53:54.118487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:25:59.202 [2024-11-26 20:53:54.118509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.118537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.118553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:59.202 [2024-11-26 20:53:54.118564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:59.202 [2024-11-26 20:53:54.118579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.118607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:59.202 [2024-11-26 20:53:54.123610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.123773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:59.202 [2024-11-26 20:53:54.123803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.006 ms 00:25:59.202 [2024-11-26 20:53:54.123814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.123900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.123913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:59.202 [2024-11-26 20:53:54.123934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:59.202 [2024-11-26 20:53:54.123945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.123975] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:59.202 [2024-11-26 20:53:54.123999] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:59.202 [2024-11-26 20:53:54.124050] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:59.202 [2024-11-26 20:53:54.124070] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:59.202 [2024-11-26 20:53:54.124167] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:59.202 [2024-11-26 20:53:54.124180] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:59.202 [2024-11-26 20:53:54.124207] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:59.202 [2024-11-26 20:53:54.124221] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:59.202 [2024-11-26 20:53:54.124238] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:59.202 [2024-11-26 20:53:54.124249] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:59.202 [2024-11-26 20:53:54.124264] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:59.202 [2024-11-26 20:53:54.124275] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:59.202 [2024-11-26 20:53:54.124294] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:59.202 [2024-11-26 20:53:54.124305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.124320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:59.202 [2024-11-26 20:53:54.124332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:25:59.202 [2024-11-26 20:53:54.124352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.124429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.202 [2024-11-26 20:53:54.124445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:59.202 [2024-11-26 20:53:54.124456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:59.202 [2024-11-26 20:53:54.124471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.202 [2024-11-26 20:53:54.124562] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:59.202 [2024-11-26 20:53:54.124579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:59.202 [2024-11-26 20:53:54.124591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:59.202 [2024-11-26 20:53:54.124606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.202 [2024-11-26 20:53:54.124635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:59.202 [2024-11-26 20:53:54.124650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:59.202 [2024-11-26 20:53:54.124660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:59.202 [2024-11-26 20:53:54.124681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:59.202 [2024-11-26 20:53:54.124692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:59.202 [2024-11-26 20:53:54.124706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:59.202 [2024-11-26 20:53:54.124717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:59.202 [2024-11-26 20:53:54.124732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:59.202 [2024-11-26 20:53:54.124741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:59.202 [2024-11-26 20:53:54.124756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:59.202 [2024-11-26 20:53:54.124767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:59.202 [2024-11-26 20:53:54.124782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.202 
[2024-11-26 20:53:54.124792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:59.202 [2024-11-26 20:53:54.124806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:59.202 [2024-11-26 20:53:54.124826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.202 [2024-11-26 20:53:54.124841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:59.202 [2024-11-26 20:53:54.124854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:59.202 [2024-11-26 20:53:54.124869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.202 [2024-11-26 20:53:54.124878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:59.202 [2024-11-26 20:53:54.124898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:59.202 [2024-11-26 20:53:54.124907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.203 [2024-11-26 20:53:54.124921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:59.203 [2024-11-26 20:53:54.124931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:59.203 [2024-11-26 20:53:54.124946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.203 [2024-11-26 20:53:54.124956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:59.203 [2024-11-26 20:53:54.124970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:59.203 [2024-11-26 20:53:54.124980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.203 [2024-11-26 20:53:54.124994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:59.203 [2024-11-26 20:53:54.125004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:59.203 [2024-11-26 20:53:54.125019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:59.203 [2024-11-26 20:53:54.125029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:59.203 [2024-11-26 20:53:54.125044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:59.203 [2024-11-26 20:53:54.125053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:59.203 [2024-11-26 20:53:54.125067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:59.203 [2024-11-26 20:53:54.125077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:59.203 [2024-11-26 20:53:54.125096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.203 [2024-11-26 20:53:54.125106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:59.203 [2024-11-26 20:53:54.125120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:59.203 [2024-11-26 20:53:54.125130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.203 [2024-11-26 20:53:54.125144] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:59.203 [2024-11-26 20:53:54.125159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:59.203 [2024-11-26 20:53:54.125174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:59.203 [2024-11-26 20:53:54.125184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.203 [2024-11-26 20:53:54.125199] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:59.203 [2024-11-26 20:53:54.125209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:59.203 [2024-11-26 20:53:54.125224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:59.203 [2024-11-26 20:53:54.125233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:59.203 [2024-11-26 20:53:54.125247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:59.203 [2024-11-26 20:53:54.125256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:59.203 [2024-11-26 20:53:54.125272] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:59.203 [2024-11-26 20:53:54.125285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:59.203 [2024-11-26 20:53:54.125306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:59.203 [2024-11-26 20:53:54.125317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:59.203 [2024-11-26 20:53:54.125333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:59.203 [2024-11-26 20:53:54.125345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:59.203 [2024-11-26 20:53:54.125360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:59.203 [2024-11-26 20:53:54.125371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:59.203 [2024-11-26 20:53:54.125386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:59.203 [2024-11-26 20:53:54.125397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:59.203 [2024-11-26 20:53:54.125412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:59.203 [2024-11-26 20:53:54.125423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:59.203 [2024-11-26 20:53:54.125438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:59.203 [2024-11-26 20:53:54.125448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:59.203 [2024-11-26 20:53:54.125463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:59.203 [2024-11-26 20:53:54.125473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:59.203 [2024-11-26 20:53:54.125488] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:59.203 [2024-11-26 
20:53:54.125500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:59.203 [2024-11-26 20:53:54.125521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:59.203 [2024-11-26 20:53:54.125532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:59.203 [2024-11-26 20:53:54.125547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:59.203 [2024-11-26 20:53:54.125559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:59.203 [2024-11-26 20:53:54.125575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.203 [2024-11-26 20:53:54.125585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:59.203 [2024-11-26 20:53:54.125601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:25:59.203 [2024-11-26 20:53:54.125626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.203 [2024-11-26 20:53:54.167768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.203 [2024-11-26 20:53:54.167818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:59.203 [2024-11-26 20:53:54.167843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.070 ms 00:25:59.203 [2024-11-26 20:53:54.167862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.203 [2024-11-26 20:53:54.168038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.203 [2024-11-26 20:53:54.168055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:59.203 [2024-11-26 20:53:54.168074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:59.203 [2024-11-26 20:53:54.168087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.462 [2024-11-26 20:53:54.217170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.463 [2024-11-26 20:53:54.217377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:59.463 [2024-11-26 20:53:54.217409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.036 ms 00:25:59.463 [2024-11-26 20:53:54.217422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.463 [2024-11-26 20:53:54.217548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.463 [2024-11-26 20:53:54.217560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:59.463 [2024-11-26 20:53:54.217576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:59.463 [2024-11-26 20:53:54.217587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.463 [2024-11-26 20:53:54.218054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.463 [2024-11-26 20:53:54.218074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:59.463 [2024-11-26 20:53:54.218090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:25:59.463 [2024-11-26 20:53:54.218100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:59.463 [2024-11-26 20:53:54.218227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.463 [2024-11-26 20:53:54.218241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:59.463 [2024-11-26 20:53:54.218257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:25:59.463 [2024-11-26 20:53:54.218267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.463 [2024-11-26 20:53:54.240433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.463 [2024-11-26 20:53:54.240856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:59.463 [2024-11-26 20:53:54.240891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.134 ms 00:25:59.463 [2024-11-26 20:53:54.240904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.463 [2024-11-26 20:53:54.275917] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:59.463 [2024-11-26 20:53:54.276052] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:59.463 [2024-11-26 20:53:54.276088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.463 [2024-11-26 20:53:54.276099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:59.463 [2024-11-26 20:53:54.276117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.029 ms 00:25:59.463 [2024-11-26 20:53:54.276139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.463 [2024-11-26 20:53:54.305968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.463 [2024-11-26 20:53:54.306006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:59.463 [2024-11-26 20:53:54.306041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.739 ms 00:25:59.463 [2024-11-26 20:53:54.306052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.463 [2024-11-26 20:53:54.324894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.463 [2024-11-26 20:53:54.324930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:59.463 [2024-11-26 20:53:54.324954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.750 ms 00:25:59.463 [2024-11-26 20:53:54.324964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.463 [2024-11-26 20:53:54.343046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.463 [2024-11-26 20:53:54.343080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:59.463 [2024-11-26 20:53:54.343114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.970 ms 00:25:59.463 [2024-11-26 20:53:54.343125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.463 [2024-11-26 20:53:54.343983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.463 [2024-11-26 20:53:54.344008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:59.463 [2024-11-26 20:53:54.344025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:25:59.463 [2024-11-26 20:53:54.344036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.463 [2024-11-26 
20:53:54.432890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.463 [2024-11-26 20:53:54.432955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:59.463 [2024-11-26 20:53:54.432979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.814 ms 00:25:59.463 [2024-11-26 20:53:54.432991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.463 [2024-11-26 20:53:54.444602] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:59.722 [2024-11-26 20:53:54.462126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.722 [2024-11-26 20:53:54.462227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:59.722 [2024-11-26 20:53:54.462260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.981 ms 00:25:59.722 [2024-11-26 20:53:54.462276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.722 [2024-11-26 20:53:54.462414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.722 [2024-11-26 20:53:54.462435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:59.722 [2024-11-26 20:53:54.462447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:59.722 [2024-11-26 20:53:54.462462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.722 [2024-11-26 20:53:54.462519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.722 [2024-11-26 20:53:54.462537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:59.722 [2024-11-26 20:53:54.462549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:59.722 [2024-11-26 20:53:54.462570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.722 [2024-11-26 20:53:54.462597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.722 [2024-11-26 20:53:54.462637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:59.722 [2024-11-26 20:53:54.462649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:59.722 [2024-11-26 20:53:54.462664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.722 [2024-11-26 20:53:54.462718] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:59.722 [2024-11-26 20:53:54.462742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.722 [2024-11-26 20:53:54.462760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:59.722 [2024-11-26 20:53:54.462776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:59.722 [2024-11-26 20:53:54.462791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.722 [2024-11-26 20:53:54.503168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.722 [2024-11-26 20:53:54.503240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:59.722 [2024-11-26 20:53:54.503263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.325 ms 00:25:59.722 [2024-11-26 20:53:54.503274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.722 [2024-11-26 20:53:54.503482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.722 [2024-11-26 20:53:54.503496] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:59.722 [2024-11-26 20:53:54.503519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:59.722 [2024-11-26 20:53:54.503529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.722 [2024-11-26 20:53:54.504826] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:59.722 [2024-11-26 20:53:54.510447] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 419.729 ms, result 0 00:25:59.722 [2024-11-26 20:53:54.511786] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:59.722 Some configs were skipped because the RPC state that can call them passed over. 00:25:59.722 20:53:54 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:59.981 [2024-11-26 20:53:54.805557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.981 [2024-11-26 20:53:54.805807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:59.981 [2024-11-26 20:53:54.805958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.588 ms 00:25:59.981 [2024-11-26 20:53:54.806019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.981 [2024-11-26 20:53:54.806116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.142 ms, result 0 00:25:59.981 true 00:25:59.981 20:53:54 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:26:00.240 [2024-11-26 20:53:55.069256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.240 [2024-11-26 20:53:55.069450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:26:00.240 [2024-11-26 20:53:55.069581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:26:00.240 [2024-11-26 20:53:55.069651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.240 [2024-11-26 20:53:55.069805] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.535 ms, result 0 00:26:00.240 true 00:26:00.240 20:53:55 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79409 00:26:00.240 20:53:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79409 ']' 00:26:00.240 20:53:55 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79409 00:26:00.240 20:53:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:26:00.240 20:53:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.240 20:53:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79409 00:26:00.240 killing process with pid 79409 00:26:00.240 20:53:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:00.240 20:53:55 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:00.240 20:53:55 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79409' 00:26:00.240 20:53:55 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79409 00:26:00.240 20:53:55 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79409 00:26:01.615 [2024-11-26 20:53:56.245127] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.615 [2024-11-26 20:53:56.245181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:01.615 [2024-11-26 20:53:56.245196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:01.615 [2024-11-26 20:53:56.245208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.615 [2024-11-26 20:53:56.245233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:01.615 [2024-11-26 20:53:56.249522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.615 [2024-11-26 20:53:56.249556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:01.615 [2024-11-26 20:53:56.249573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.267 ms 00:26:01.615 [2024-11-26 20:53:56.249583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.615 [2024-11-26 20:53:56.249851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.615 [2024-11-26 20:53:56.249865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:01.615 [2024-11-26 20:53:56.249878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:26:01.615 [2024-11-26 20:53:56.249888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.615 [2024-11-26 20:53:56.253293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.615 [2024-11-26 20:53:56.253331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:01.615 [2024-11-26 20:53:56.253346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.381 ms 00:26:01.615 [2024-11-26 20:53:56.253357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.615 [2024-11-26 20:53:56.259046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.615 [2024-11-26 20:53:56.259088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:01.616 [2024-11-26 20:53:56.259103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.648 ms 00:26:01.616 [2024-11-26 20:53:56.259129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.616 [2024-11-26 20:53:56.274452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.616 [2024-11-26 20:53:56.274628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:01.616 [2024-11-26 20:53:56.274729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.260 ms 00:26:01.616 [2024-11-26 20:53:56.274766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.616 [2024-11-26 20:53:56.285859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.616 [2024-11-26 20:53:56.285989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:01.616 [2024-11-26 20:53:56.286172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.996 ms 00:26:01.616 [2024-11-26 20:53:56.286211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.616 [2024-11-26 20:53:56.286382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.616 [2024-11-26 20:53:56.286399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:01.616 [2024-11-26 20:53:56.286413] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:26:01.616 [2024-11-26 20:53:56.286423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.616 [2024-11-26 20:53:56.302217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.616 [2024-11-26 20:53:56.302248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:01.616 [2024-11-26 20:53:56.302284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.771 ms 00:26:01.616 [2024-11-26 20:53:56.302295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.616 [2024-11-26 20:53:56.317530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.616 [2024-11-26 20:53:56.317696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:01.616 [2024-11-26 20:53:56.317731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.176 ms 00:26:01.616 [2024-11-26 20:53:56.317741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.616 [2024-11-26 20:53:56.332262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.616 [2024-11-26 20:53:56.332295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:01.616 [2024-11-26 20:53:56.332330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.429 ms 00:26:01.616 [2024-11-26 20:53:56.332340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.616 [2024-11-26 20:53:56.347183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.616 [2024-11-26 20:53:56.347337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:01.616 [2024-11-26 20:53:56.347368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.756 ms 00:26:01.616 [2024-11-26 20:53:56.347378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.616 [2024-11-26 20:53:56.347434] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:01.616 [2024-11-26 20:53:56.347451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 
20:53:56.347597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:26:01.616 [2024-11-26 20:53:56.347963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.347994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:26:01.616 [2024-11-26 20:53:56.348294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:01.617 [2024-11-26 20:53:56.348819] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:01.617 [2024-11-26 20:53:56.348835] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 532d7671-2fbd-470b-89fe-c9097b3f6a68 00:26:01.617 [2024-11-26 20:53:56.348849] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:01.617 [2024-11-26 20:53:56.348861] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:01.617 [2024-11-26 20:53:56.348870] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:01.617 [2024-11-26 20:53:56.348888] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:01.617 [2024-11-26 20:53:56.348898] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:01.617 [2024-11-26 20:53:56.348913] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:01.617 [2024-11-26 20:53:56.348923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:01.617 [2024-11-26 20:53:56.348936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:01.617 [2024-11-26 20:53:56.348946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:01.617 [2024-11-26 20:53:56.348960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:01.617 [2024-11-26 20:53:56.348970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:01.617 [2024-11-26 20:53:56.348986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:26:01.617 [2024-11-26 20:53:56.349001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.617 [2024-11-26 20:53:56.369215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.617 [2024-11-26 20:53:56.369366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:01.617 [2024-11-26 20:53:56.369538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.183 ms 00:26:01.617 [2024-11-26 20:53:56.369577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.617 [2024-11-26 20:53:56.370211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.617 [2024-11-26 20:53:56.370315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:01.617 [2024-11-26 20:53:56.370407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:26:01.617 [2024-11-26 20:53:56.370444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.617 [2024-11-26 20:53:56.441804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.617 [2024-11-26 20:53:56.441956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:01.617 [2024-11-26 20:53:56.442043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.617 [2024-11-26 20:53:56.442081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.617 [2024-11-26 20:53:56.442201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.617 [2024-11-26 20:53:56.442318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:01.617 [2024-11-26 20:53:56.442400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.617 [2024-11-26 20:53:56.442432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.617 [2024-11-26 20:53:56.442514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.617 [2024-11-26 20:53:56.442552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:01.617 [2024-11-26 20:53:56.442677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.617 [2024-11-26 20:53:56.442719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.617 [2024-11-26 20:53:56.442772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.617 [2024-11-26 20:53:56.442806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:01.617 [2024-11-26 20:53:56.442899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.617 [2024-11-26 20:53:56.442936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.617 [2024-11-26 20:53:56.568845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.617 [2024-11-26 20:53:56.569112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:01.617 [2024-11-26 20:53:56.569238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.617 [2024-11-26 20:53:56.569278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.876 [2024-11-26 
20:53:56.671351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.876 [2024-11-26 20:53:56.671568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:01.876 [2024-11-26 20:53:56.671698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.876 [2024-11-26 20:53:56.671740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.876 [2024-11-26 20:53:56.671898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.876 [2024-11-26 20:53:56.671984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:01.876 [2024-11-26 20:53:56.672033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.876 [2024-11-26 20:53:56.672066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.876 [2024-11-26 20:53:56.672226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.876 [2024-11-26 20:53:56.672265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:01.876 [2024-11-26 20:53:56.672353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.876 [2024-11-26 20:53:56.672390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.876 [2024-11-26 20:53:56.672553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.876 [2024-11-26 20:53:56.672678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:01.876 [2024-11-26 20:53:56.672788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.876 [2024-11-26 20:53:56.672825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.876 [2024-11-26 20:53:56.672907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.876 [2024-11-26 20:53:56.672996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:01.877 [2024-11-26 20:53:56.673040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.877 [2024-11-26 20:53:56.673072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.877 [2024-11-26 20:53:56.673188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.877 [2024-11-26 20:53:56.673277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:01.877 [2024-11-26 20:53:56.673354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.877 [2024-11-26 20:53:56.673390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.877 [2024-11-26 20:53:56.673471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:01.877 [2024-11-26 20:53:56.673525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:01.877 [2024-11-26 20:53:56.673600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:01.877 [2024-11-26 20:53:56.673645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.877 [2024-11-26 20:53:56.673832] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 428.673 ms, result 0 00:26:02.812 20:53:57 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:03.071 [2024-11-26 20:53:57.821511] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:26:03.071 [2024-11-26 20:53:57.821711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79474 ] 00:26:03.071 [2024-11-26 20:53:58.004545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.329 [2024-11-26 20:53:58.121416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.587 [2024-11-26 20:53:58.491451] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:03.587 [2024-11-26 20:53:58.491513] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:03.847 [2024-11-26 20:53:58.654711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.654769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:03.847 [2024-11-26 20:53:58.654785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:03.847 [2024-11-26 20:53:58.654796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.658003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.658177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:03.847 [2024-11-26 20:53:58.658201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.184 ms 00:26:03.847 [2024-11-26 20:53:58.658212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.658412] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:03.847 [2024-11-26 20:53:58.659491] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:03.847 [2024-11-26 20:53:58.659519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.659531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:03.847 [2024-11-26 20:53:58.659542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:26:03.847 [2024-11-26 20:53:58.659553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.661228] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:03.847 [2024-11-26 20:53:58.682540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.682627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:03.847 [2024-11-26 20:53:58.682646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.309 ms 00:26:03.847 [2024-11-26 20:53:58.682657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.682859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.682875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:03.847 [2024-11-26 20:53:58.682888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:03.847 [2024-11-26 
20:53:58.682898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.690655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.690699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:03.847 [2024-11-26 20:53:58.690713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.704 ms 00:26:03.847 [2024-11-26 20:53:58.690740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.690874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.690894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:03.847 [2024-11-26 20:53:58.690906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:03.847 [2024-11-26 20:53:58.690916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.690956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.690968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:03.847 [2024-11-26 20:53:58.690980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:03.847 [2024-11-26 20:53:58.690990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.691020] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:03.847 [2024-11-26 20:53:58.696200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.696248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:03.847 [2024-11-26 20:53:58.696263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.189 ms 00:26:03.847 [2024-11-26 20:53:58.696273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.696382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.696395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:03.847 [2024-11-26 20:53:58.696406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:03.847 [2024-11-26 20:53:58.696417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.696447] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:03.847 [2024-11-26 20:53:58.696474] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:03.847 [2024-11-26 20:53:58.696513] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:03.847 [2024-11-26 20:53:58.696533] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:03.847 [2024-11-26 20:53:58.696648] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:03.847 [2024-11-26 20:53:58.696664] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:03.847 [2024-11-26 20:53:58.696677] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:26:03.847 [2024-11-26 20:53:58.696696] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:03.847 [2024-11-26 20:53:58.696708] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:03.847 [2024-11-26 20:53:58.696720] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:03.847 [2024-11-26 20:53:58.696731] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:03.847 [2024-11-26 20:53:58.696741] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:03.847 [2024-11-26 20:53:58.696751] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:03.847 [2024-11-26 20:53:58.696763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.696774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:03.847 [2024-11-26 20:53:58.696785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:26:03.847 [2024-11-26 20:53:58.696795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.696876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.847 [2024-11-26 20:53:58.696892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:03.847 [2024-11-26 20:53:58.696903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:03.847 [2024-11-26 20:53:58.696913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.847 [2024-11-26 20:53:58.697011] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:03.847 [2024-11-26 20:53:58.697024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:03.847 [2024-11-26 20:53:58.697035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:03.847 [2024-11-26 20:53:58.697046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:03.847 [2024-11-26 20:53:58.697068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:03.847 [2024-11-26 20:53:58.697089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:03.847 [2024-11-26 20:53:58.697099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:03.847 [2024-11-26 20:53:58.697118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:03.847 [2024-11-26 20:53:58.697142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:03.847 [2024-11-26 20:53:58.697152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:03.847 [2024-11-26 20:53:58.697162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:03.847 [2024-11-26 20:53:58.697174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:03.847 [2024-11-26 20:53:58.697184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:26:03.847 [2024-11-26 20:53:58.697203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:03.847 [2024-11-26 20:53:58.697213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:03.847 [2024-11-26 20:53:58.697233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.847 [2024-11-26 20:53:58.697252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:03.847 [2024-11-26 20:53:58.697261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.847 [2024-11-26 20:53:58.697280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:03.847 [2024-11-26 20:53:58.697290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.847 [2024-11-26 20:53:58.697308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:03.847 [2024-11-26 20:53:58.697318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.847 [2024-11-26 20:53:58.697337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:03.847 [2024-11-26 20:53:58.697346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:03.847 [2024-11-26 20:53:58.697365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:03.847 [2024-11-26 20:53:58.697375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:03.847 [2024-11-26 20:53:58.697384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:03.847 [2024-11-26 20:53:58.697393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:03.847 [2024-11-26 20:53:58.697402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:03.847 [2024-11-26 20:53:58.697411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:03.847 [2024-11-26 20:53:58.697429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:03.847 [2024-11-26 20:53:58.697440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697449] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:03.847 [2024-11-26 20:53:58.697460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:03.847 [2024-11-26 20:53:58.697473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:03.847 [2024-11-26 20:53:58.697484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.847 [2024-11-26 20:53:58.697494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:03.848 [2024-11-26 20:53:58.697504] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:03.848 [2024-11-26 20:53:58.697514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:03.848 [2024-11-26 20:53:58.697524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:03.848 [2024-11-26 20:53:58.697533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:03.848 [2024-11-26 20:53:58.697542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:03.848 [2024-11-26 20:53:58.697553] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:03.848 [2024-11-26 20:53:58.697566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:03.848 [2024-11-26 20:53:58.697578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:03.848 [2024-11-26 20:53:58.697589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:03.848 [2024-11-26 20:53:58.697599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:03.848 [2024-11-26 20:53:58.697609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:03.848 [2024-11-26 20:53:58.697631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:03.848 [2024-11-26 20:53:58.697642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:03.848 [2024-11-26 20:53:58.697653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:03.848 [2024-11-26 20:53:58.697663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:03.848 [2024-11-26 20:53:58.697674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:03.848 [2024-11-26 20:53:58.697685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:03.848 [2024-11-26 20:53:58.697696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:03.848 [2024-11-26 20:53:58.697706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:03.848 [2024-11-26 20:53:58.697717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:03.848 [2024-11-26 20:53:58.697727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:03.848 [2024-11-26 20:53:58.697738] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:03.848 [2024-11-26 20:53:58.697750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:03.848 [2024-11-26 20:53:58.697762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:03.848 [2024-11-26 20:53:58.697773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:03.848 [2024-11-26 20:53:58.697783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:03.848 [2024-11-26 20:53:58.697794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:03.848 [2024-11-26 20:53:58.697805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.848 [2024-11-26 20:53:58.697822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:03.848 [2024-11-26 20:53:58.697833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:26:03.848 [2024-11-26 20:53:58.697843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.848 [2024-11-26 20:53:58.738935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.848 [2024-11-26 20:53:58.739227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.848 [2024-11-26 20:53:58.739329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.024 ms 00:26:03.848 [2024-11-26 20:53:58.739368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.848 [2024-11-26 20:53:58.739588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.848 [2024-11-26 20:53:58.739748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:03.848 [2024-11-26 20:53:58.739839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:03.848 [2024-11-26 20:53:58.739875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.848 [2024-11-26 20:53:58.796989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.848 [2024-11-26 20:53:58.797219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.848 [2024-11-26 20:53:58.797311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.057 ms 00:26:03.848 [2024-11-26 20:53:58.797348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.848 [2024-11-26 20:53:58.797509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.848 [2024-11-26 20:53:58.797610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.848 [2024-11-26 20:53:58.797663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:03.848 [2024-11-26 20:53:58.797694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.848 [2024-11-26 20:53:58.798224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.848 [2024-11-26 20:53:58.798323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.848 [2024-11-26 20:53:58.798404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:26:03.848 [2024-11-26 20:53:58.798438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.848 [2024-11-26 20:53:58.798595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:03.848 [2024-11-26 20:53:58.798676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.848 [2024-11-26 20:53:58.798731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:26:03.848 [2024-11-26 20:53:58.798762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.848 [2024-11-26 20:53:58.819062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.848 [2024-11-26 20:53:58.819221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.848 [2024-11-26 20:53:58.819296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.251 ms 00:26:03.848 [2024-11-26 20:53:58.819332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:58.839282] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:04.111 [2024-11-26 20:53:58.839436] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:04.111 [2024-11-26 20:53:58.839536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:58.839569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:04.111 [2024-11-26 20:53:58.839601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.057 ms 00:26:04.111 [2024-11-26 20:53:58.839656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:58.869874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:58.870036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:04.111 [2024-11-26 20:53:58.870150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.093 ms 00:26:04.111 [2024-11-26 20:53:58.870187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:58.889016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:58.889155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:04.111 [2024-11-26 20:53:58.889272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.726 ms 00:26:04.111 [2024-11-26 20:53:58.889309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:58.907925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:58.908051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:04.111 [2024-11-26 20:53:58.908171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.519 ms 00:26:04.111 [2024-11-26 20:53:58.908208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:58.909023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:58.909139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:04.111 [2024-11-26 20:53:58.909209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:26:04.111 [2024-11-26 20:53:58.909244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:58.997950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 
20:53:58.998192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:04.111 [2024-11-26 20:53:58.998270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.651 ms 00:26:04.111 [2024-11-26 20:53:58.998307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:59.009622] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:04.111 [2024-11-26 20:53:59.026177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:59.026422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:04.111 [2024-11-26 20:53:59.026448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.720 ms 00:26:04.111 [2024-11-26 20:53:59.026468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:59.026637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:59.026654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:04.111 [2024-11-26 20:53:59.026665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:04.111 [2024-11-26 20:53:59.026675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:59.026733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:59.026746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:04.111 [2024-11-26 20:53:59.026757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:04.111 [2024-11-26 20:53:59.026771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:59.026805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:59.026819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:04.111 [2024-11-26 20:53:59.026829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:04.111 [2024-11-26 20:53:59.026839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:59.026878] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:04.111 [2024-11-26 20:53:59.026891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:59.026902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:04.111 [2024-11-26 20:53:59.026912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:04.111 [2024-11-26 20:53:59.026922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:59.064047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:59.064103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:04.111 [2024-11-26 20:53:59.064117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.103 ms 00:26:04.111 [2024-11-26 20:53:59.064128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:59.064250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.111 [2024-11-26 20:53:59.064264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:04.111 [2024-11-26 
20:53:59.064275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:04.111 [2024-11-26 20:53:59.064286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.111 [2024-11-26 20:53:59.065235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:04.111 [2024-11-26 20:53:59.069506] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 410.200 ms, result 0 00:26:04.111 [2024-11-26 20:53:59.070406] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:04.111 [2024-11-26 20:53:59.089129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:05.497  [2024-11-26T20:54:01.445Z] Copying: 31/256 [MB] (31 MBps) [2024-11-26T20:54:02.381Z] Copying: 57/256 [MB] (26 MBps) [2024-11-26T20:54:03.319Z] Copying: 86/256 [MB] (28 MBps) [2024-11-26T20:54:04.256Z] Copying: 114/256 [MB] (28 MBps) [2024-11-26T20:54:05.193Z] Copying: 143/256 [MB] (28 MBps) [2024-11-26T20:54:06.568Z] Copying: 170/256 [MB] (27 MBps) [2024-11-26T20:54:07.503Z] Copying: 198/256 [MB] (27 MBps) [2024-11-26T20:54:08.438Z] Copying: 226/256 [MB] (28 MBps) [2024-11-26T20:54:08.438Z] Copying: 254/256 [MB] (27 MBps) [2024-11-26T20:54:08.696Z] Copying: 256/256 [MB] (average 28 MBps)[2024-11-26 20:54:08.660215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:13.702 [2024-11-26 20:54:08.676517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.702 [2024-11-26 20:54:08.676564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:13.702 [2024-11-26 20:54:08.676589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:13.702 [2024-11-26 20:54:08.676600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.702 [2024-11-26 20:54:08.676639] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:13.702 [2024-11-26 20:54:08.681024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.702 [2024-11-26 20:54:08.681051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:13.702 [2024-11-26 20:54:08.681064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.366 ms 00:26:13.702 [2024-11-26 20:54:08.681090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.702 [2024-11-26 20:54:08.681338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.702 [2024-11-26 20:54:08.681352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:13.702 [2024-11-26 20:54:08.681363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:26:13.702 [2024-11-26 20:54:08.681373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.702 [2024-11-26 20:54:08.684668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.702 [2024-11-26 20:54:08.684693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:13.702 [2024-11-26 20:54:08.684704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.273 ms 00:26:13.702 [2024-11-26 20:54:08.684714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.702 [2024-11-26 
20:54:08.690836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.702 [2024-11-26 20:54:08.690870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:13.702 [2024-11-26 20:54:08.690882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.976 ms 00:26:13.702 [2024-11-26 20:54:08.690893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.961 [2024-11-26 20:54:08.728006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.961 [2024-11-26 20:54:08.728043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:13.961 [2024-11-26 20:54:08.728057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.038 ms 00:26:13.961 [2024-11-26 20:54:08.728083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.961 [2024-11-26 20:54:08.748771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.962 [2024-11-26 20:54:08.748808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:13.962 [2024-11-26 20:54:08.748827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.624 ms 00:26:13.962 [2024-11-26 20:54:08.748838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.962 [2024-11-26 20:54:08.748975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.962 [2024-11-26 20:54:08.748989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:13.962 [2024-11-26 20:54:08.749022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:13.962 [2024-11-26 20:54:08.749031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.962 [2024-11-26 20:54:08.785362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.962 [2024-11-26 20:54:08.785395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:13.962 [2024-11-26 20:54:08.785409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.312 ms 00:26:13.962 [2024-11-26 20:54:08.785435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.962 [2024-11-26 20:54:08.821522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.962 [2024-11-26 20:54:08.821555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:13.962 [2024-11-26 20:54:08.821568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.030 ms 00:26:13.962 [2024-11-26 20:54:08.821577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.962 [2024-11-26 20:54:08.856859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.962 [2024-11-26 20:54:08.857014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:13.962 [2024-11-26 20:54:08.857035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.220 ms 00:26:13.962 [2024-11-26 20:54:08.857045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.962 [2024-11-26 20:54:08.892369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.962 [2024-11-26 20:54:08.892519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:13.962 [2024-11-26 20:54:08.892540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.244 ms 00:26:13.962 [2024-11-26 20:54:08.892550] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-11-26 20:54:08.892638] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-26 20:54:08.892658 .. 893733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (100 identical per-band entries condensed)
[2024-11-26 20:54:08.893752] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-26 20:54:08.893762] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 532d7671-2fbd-470b-89fe-c9097b3f6a68
[2024-11-26 20:54:08.893784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-26 20:54:08.893793] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-26 20:54:08.893804] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-26 20:54:08.893814] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-26 20:54:08.893823 .. 893864] ftl_debug.c: 218-220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-11-26 20:54:08.893874 .. 893905] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 1.238 ms, status: 0
[2024-11-26 20:54:08.913923 .. 913978] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 19.997 ms, status: 0
[2024-11-26 20:54:08.914511 .. 914547] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.491 ms, status: 0
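For scale, an editorial check on the band dump above: assuming the 4096-byte block size that bdev_get_bdevs reports elsewhere in this log, the 100 bands of 261120 blocks cover 102000 MiB, essentially the whole ~100 GiB base device, with the rest of the disk taken by FTL metadata regions (see the layout dump further down).

# Editorial arithmetic for the band dump above (4 KiB blocks assumed):
echo $(( 100 * 261120 * 4096 / 1024 / 1024 ))   # -> 102000 (MiB across 100 bands)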
[2024-11-26 20:54:08.970011 .. 09.192726] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev (12 steps, each duration: 0.000 ms, status: 0; per-step Action/name/duration/status entries condensed)
[2024-11-26 20:54:09.192881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 516.369 ms, result 0
00:26:15.597 20:54:10 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:26:15.855 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:26:15.855 20:54:10 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:26:15.855 20:54:10 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:26:16.114 20:54:10 ftl.ftl_trim -- ftl/trim.sh@15-18 -- # rm -f testfile.md5 config/ftl.json random_pattern data (all under /home/vagrant/spdk_repo/spdk/test/ftl; four rm entries condensed)
00:26:16.114 20:54:10 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79409
00:26:16.114 20:54:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79409
00:26:16.114 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79409) - No such process
00:26:16.114 Process with pid 79409 is not found
00:26:16.114 ************************************
00:26:16.114 END TEST ftl_trim
00:26:16.114 ************************************
00:26:16.114 real	1m9.158s
00:26:16.114 user	1m37.049s
00:26:16.114 sys	0m7.154s
00:26:16.114 20:54:11 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
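Before the restore test starts, a note on the 'data: OK' result above: it is the payoff of a simple round-trip pattern in which a checksum recorded earlier in the run (the recording step is not shown in this excerpt) is re-verified after the FTL device has been shut down. A sketch, using the paths from the log:

# Editorial sketch of the trim test's integrity check (recording step assumed):
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
# ... FTL shutdown / reload cycle happens in between ...
md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5   # prints '.../data: OK' on success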
00:26:16.114 START TEST ftl_restore
00:26:16.114 ************************************
00:26:16.114 20:54:11 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:26:16.374 * Looking for test storage...
00:26:16.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:26:16.374 20:54:11 ftl.ftl_restore -- common/autotest_common.sh@1692-1693 -- # lcov --version | awk '{print $NF}' -> 1.15; lt 1.15 2
00:26:16.374 20:54:11 ftl.ftl_restore -- scripts/common.sh@333-368 -- # cmp_versions 1.15 '<' 2 -> true (component-wise compare of ver1=(1 15) against ver2=(2); xtrace of the comparison loop condensed)
00:26:16.374 20:54:11 ftl.ftl_restore -- common/autotest_common.sh@1694-1707 -- # lcov is older than 2, so lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' and LCOV_OPTS/LCOV are exported with those flags plus '--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1' (four repeated option blocks condensed)
00:26:16.375 20:54:11 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:26:16.375 20:54:11 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:26:16.375 20:54:11 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
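The condensed version check above is just a component-wise numeric compare. A minimal sketch of the logic the xtrace walks through (editorial, not the verbatim scripts/common.sh):

# Editorial sketch: return success if dotted version $1 is less than $2.
lt() {
  local -a v1 v2
  IFS='.-' read -ra v1 <<< "$1"
  IFS='.-' read -ra v2 <<< "$2"
  local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing components default to 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # equal is not "less than"
}
lt 1.15 2 && echo "lcov < 2: use the old --rc option names"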
00:26:16.375 20:54:11 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:26:16.375 20:54:11 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:26:16.375 20:54:11 ftl.ftl_restore -- ftl/common.sh@12-25 -- # exports (duplicate export/assignment xtrace pairs condensed): ftl_tgt_core_mask='[0]'; spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt; spdk_tgt_cpumask='[0]'; spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json; spdk_tgt_pid=; spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt; spdk_ini_cpumask='[1]'; spdk_ini_rpc=/var/tmp/spdk.tgt.sock; spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json; spdk_ini_pid=; spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:26:16.375 20:54:11 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:26:16.375 20:54:11 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=$(mktemp -d) -> /tmp/tmp.HWlhBlSplG
00:26:16.375 20:54:11 ftl.ftl_restore -- ftl/restore.sh@15-25 -- # getopts :u:c:f -> nv_cache=0000:00:10.0; shift 2; device=0000:00:11.0; timeout=240
00:26:16.375 20:54:11 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
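The trap above is the harness's standard teardown guard: any early exit or interrupt runs the cleanup function before the script dies. A minimal sketch of the idiom (the body of restore_kill here is an assumption; the real one lives in restore.sh):

# Editorial sketch of the cleanup idiom; restore_kill's body is assumed.
restore_kill() {
  rm -f "$mount_dir"/testfile* 2>/dev/null
  [[ -n $svcpid ]] && kill "$svcpid" 2>/dev/null
}
trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
# ... test body ...
trap - SIGINT SIGTERM EXIT   # disarm on success, as trim.sh@108 did above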
20:54:11 ftl.ftl_restore -- ftl/restore.sh@38-39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt & svcpid=79673
00:26:16.375 20:54:11 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79673 (rpc_addr=/var/tmp/spdk.sock, max_retries=100; helper xtrace condensed)
00:26:16.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:16.635 [2024-11-26 20:54:11.393423] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:26:16.635 [2024-11-26 20:54:11.393921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79673 ]
00:26:16.635 [2024-11-26 20:54:11.598778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:16.894 [2024-11-26 20:54:11.775469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:17.831 20:54:12 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:26:17.831 20:54:12 ftl.ftl_restore -- ftl/common.sh@54-56 -- # name=nvme0, base_bdf=0000:00:11.0, size=103424
00:26:17.831 20:54:12 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:26:18.092 20:54:12 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:26:18.092 20:54:12 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:26:18.092 20:54:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:26:18.358 20:54:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ { "name": "nvme0n1", "aliases": [ "c18fefbd-3236-43fe-96fc-b62b39fb997b" ], "product_name": "NVMe disk", "block_size": 4096, "num_blocks": 1310720, "uuid": "c18fefbd-3236-43fe-96fc-b62b39fb997b", "numa_id": -1, "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 }, "claimed": true, "claim_type": "read_many_write_one", "zoned": false, "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": true, "reset": true, "nvme_admin": true, "nvme_io": true, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": true, "compare_and_write": false, "abort": true, "seek_hole": false, "seek_data": false, "copy": true, "nvme_iov_md": false }, "driver_specific": { "nvme": [ { "pci_address": "0000:00:11.0", "trid": { "trtype": "PCIe", "traddr": "0000:00:11.0" }, "ctrlr_data": { "cntlid": 0, "vendor_id": "0x1b36", "model_number": "QEMU NVMe Ctrl", "serial_number": "12341", "firmware_revision": "8.0.0", "subnqn": "nqn.2019-08.org.qemu:12341", "oacs": { "security": 0, "format": 1, "firmware": 0, "ns_manage": 1 }, "multi_ctrlr": false, "ana_reporting": false }, "vs": { "nvme_version": "1.4" }, "ns_data": { "id": 1, "can_share": false } } ], "mp_policy": "active_passive" } } ]' (interleaved runner timestamps stripped)
00:26:18.358 20:54:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' -> bs=4096
00:26:18.358 20:54:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' -> nb=1310720
00:26:18.358 20:54:13 ftl.ftl_restore -- common/autotest_common.sh@1391-1392 -- # bdev_size=5120; echo 5120
00:26:18.358 20:54:13 ftl.ftl_restore -- ftl/common.sh@63-64 -- # base_size=5120; [[ 103424 -le 5120 ]]
00:26:18.358 20:54:13 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols
00:26:18.358 20:54:13 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'
00:26:18.635 20:54:13 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=b06cebfa-4642-4455-80e8-277a4d55ad2f
00:26:18.635 20:54:13 ftl.ftl_restore -- ftl/common.sh@29-30 -- # for lvs in $stores: /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b06cebfa-4642-4455-80e8-277a4d55ad2f
00:26:18.894 20:54:13 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:26:19.184 20:54:14 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=703ee9f7-b9ba-4d26-93c5-5450400f5d1f
00:26:19.184 20:54:14 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 703ee9f7-b9ba-4d26-93c5-5450400f5d1f
00:26:19.442 20:54:14 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=541001df-0554-4c79-8de3-d19e2858f4e1
00:26:19.442 20:54:14 ftl.ftl_restore -- ftl/restore.sh@44-45 -- # '[' -n 0000:00:10.0 ']' && create_nv_cache_bdev nvc0 0000:00:10.0 541001df-0554-4c79-8de3-d19e2858f4e1
00:26:19.442 20:54:14 ftl.ftl_restore -- ftl/common.sh@35-38 -- # name=nvc0, cache_bdf=0000:00:10.0, base_bdev=541001df-0554-4c79-8de3-d19e2858f4e1, cache_size=
00:26:19.442 20:54:14 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 541001df-0554-4c79-8de3-d19e2858f4e1
00:26:19.443 20:54:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 541001df-0554-4c79-8de3-d19e2858f4e1
00:26:19.443 20:54:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ { "name": "541001df-0554-4c79-8de3-d19e2858f4e1", "aliases": [ "lvs/nvme0n1p0" ], "product_name": "Logical Volume", "block_size": 4096, "num_blocks": 26476544, "uuid": "541001df-0554-4c79-8de3-d19e2858f4e1", "assigned_rate_limits": { "rw_ios_per_sec": 0, "rw_mbytes_per_sec": 0, "r_mbytes_per_sec": 0, "w_mbytes_per_sec": 0 }, "claimed": false, "zoned": false, "supported_io_types": { "read": true, "write": true, "unmap": true, "flush": false, "reset": true, "nvme_admin": false, "nvme_io": false, "nvme_io_md": false, "write_zeroes": true, "zcopy": false, "get_zone_info": false, "zone_management": false, "zone_append": false, "compare": false, "compare_and_write": false, "abort": false, "seek_hole": true, "seek_data": true, "copy": false, "nvme_iov_md": false }, "driver_specific": { "lvol": { "lvol_store_uuid": "703ee9f7-b9ba-4d26-93c5-5450400f5d1f", "base_bdev": "nvme0n1", "thin_provision": true, "num_allocated_clusters": 0, "snapshot": false, "clone": false, "esnap_clone": false } } } ]' (interleaved runner timestamps stripped)
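Both get_bdev_size calls above follow the same jq pattern. Reconstructed from the xtrace as a sketch (the real helper lives in test/common/autotest_common.sh and may differ in detail):

# Editorial sketch of get_bdev_size, reconstructed from the xtrace above.
get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
    echo $(( bs * nb / 1024 / 1024 ))   # size in MiB: 4096*26476544 -> 103424
}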
00:26:19.701 20:54:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' -> bs=4096
00:26:19.701 20:54:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' -> nb=26476544
00:26:19.701 20:54:14 ftl.ftl_restore -- common/autotest_common.sh@1391-1392 -- # bdev_size=103424; echo 103424
00:26:19.701 20:54:14 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171
00:26:19.701 20:54:14 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev
00:26:19.701 20:54:14 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:26:19.961 20:54:14 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:26:19.961 20:54:14 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]]
00:26:19.961 20:54:14 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 541001df-0554-4c79-8de3-d19e2858f4e1 (second bdev_get_bdevs dump, identical to the one above, condensed) -> bs=4096, nb=26476544, bdev_size=103424
00:26:20.220 20:54:15 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171
00:26:20.479 20:54:15 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:26:20.479 20:54:15 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0
00:26:20.479 20:54:15 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 541001df-0554-4c79-8de3-d19e2858f4e1 (third identical bdev_get_bdevs dump and jq calls condensed)
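One quirk worth flagging before the bdev_ftl_create call: restore.sh line 54 evaluates '[' '' -eq 1 ']' because no -f option was given, and test(1) rejects the empty operand as a non-integer, producing the harmless 'integer expression expected' message in the entries below. The usual guard looks like this (editorial sketch; fast_mode is a hypothetical name standing in for the script's flag variable):

# Editorial sketch: guard a numeric test against an unset/empty variable
# so '[' never sees a non-integer operand.
fast_mode=""                            # e.g. left empty when -f is not passed
if [[ ${fast_mode:-0} -eq 1 ]]; then    # default the empty value to 0 first
  echo "fast mode enabled"
fi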
00:26:20.737 20:54:15 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:26:20.737 20:54:15 ftl.ftl_restore -- common/autotest_common.sh@1391-1392 -- # bdev_size=103424; echo 103424
00:26:20.737 20:54:15 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10
00:26:20.737 20:54:15 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 541001df-0554-4c79-8de3-d19e2858f4e1 --l2p_dram_limit 10'
00:26:20.737 20:54:15 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']'
00:26:20.737 20:54:15 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:26:20.737 20:54:15 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
00:26:20.737 20:54:15 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']'
00:26:20.737 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected
00:26:20.737 20:54:15 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 541001df-0554-4c79-8de3-d19e2858f4e1 --l2p_dram_limit 10 -c nvc0n1p0
00:26:20.997 [2024-11-26 20:54:15.936850 .. 936950] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.005 ms, status: 0
00:26:20.997 [2024-11-26 20:54:15.937027 .. 937065] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.050 ms, status: 0
00:26:20.997 [2024-11-26 20:54:15.937090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:20.997 [2024-11-26 20:54:15.938120] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:20.997 [2024-11-26 20:54:15.938155 .. 938190] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 1.066 ms, status: 0
00:26:20.997 [2024-11-26 20:54:15.938237] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID de9129c4-9c7d-4ebf-b0c4-26f94eaac199
00:26:20.997 [2024-11-26 20:54:15.939654 .. 939709] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Default-initialize superblock, duration: 0.041 ms, status: 0
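At this point the new ftl0 bdev exists. For what it's worth, it can be inspected from another shell with the same RPC the harness uses throughout this log (editorial suggestion, not part of the test run):

# Editorial: inspect the freshly created FTL bdev with the usual RPC helper.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0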
[2024-11-26 20:54:15.947264 .. 947547] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 7.476 ms, status: 0
[2024-11-26 20:54:15.947700 .. 947911] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.101 ms, status: 0
[2024-11-26 20:54:15.948008 .. 948227] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.008 ms, status: 0
[2024-11-26 20:54:15.948274] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-26 20:54:15.954037 .. 954349] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 5.767 ms, status: 0
[2024-11-26 20:54:15.954412 .. 954570] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.010 ms, status: 0
[2024-11-26 20:54:15.954645] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
[2024-11-26 20:54:15.954839] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-26 20:54:15.954979] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-26 20:54:15.955136] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-26 20:54:15.955198] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-26 20:54:15.955305] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-26 20:54:15.955359] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-26 20:54:15.955443] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-26 20:54:15.955480] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
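Two editorial consistency checks on the numbers above, assuming the 4096-byte blocks reported earlier in this log: the 4-byte L2P entries fill exactly the 80.00 MiB l2p region shown in the layout dump below, and the table can map 80 GiB of user data.

# Editorial arithmetic on the L2P figures above:
echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80 (MiB of L2P table, the l2p region below)
echo $(( 20971520 * 4096 / 1024**3 ))    # -> 80 (GiB of 4 KiB pages the table can map)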
[2024-11-26 20:54:15.955511] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-26 20:54:15.955546 .. 955677] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.903 ms, status: 0
[2024-11-26 20:54:15.955833 .. 956037] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.056 ms, status: 0
[2024-11-26 20:54:15.956170] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region / offset / blocks; per-region dump_region entries condensed into a table):
    sb                   0.00 MiB       0.12 MiB
    l2p                  0.12 MiB      80.00 MiB
    band_md             80.12 MiB       0.50 MiB
    band_md_mirror      80.62 MiB       0.50 MiB
    nvc_md             113.88 MiB       0.12 MiB
    nvc_md_mirror      114.00 MiB       0.12 MiB
    p2l0                81.12 MiB       8.00 MiB
    p2l1                89.12 MiB       8.00 MiB
    p2l2                97.12 MiB       8.00 MiB
    p2l3               105.12 MiB       8.00 MiB
    trim_md            113.12 MiB       0.25 MiB
    trim_md_mirror     113.38 MiB       0.25 MiB
    trim_log           113.62 MiB       0.12 MiB
    trim_log_mirror    113.75 MiB       0.12 MiB
[2024-11-26 20:54:15.957412] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region / offset / blocks):
    sb_mirror            0.00 MiB       0.12 MiB
    vmap            102400.25 MiB       3.38 MiB
    data_btm             0.25 MiB  102400.00 MiB
[2024-11-26 20:54:15.957535] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-11-26 20:54:15.957754] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
[2024-11-26 20:54:15.957816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*:
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:20.999 [2024-11-26 20:54:15.957828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.999 [2024-11-26 20:54:15.957842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:20.999 [2024-11-26 20:54:15.957853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.719 ms 00:26:20.999 [2024-11-26 20:54:15.957866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.999 [2024-11-26 20:54:15.957934] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:20.999 [2024-11-26 20:54:15.957953] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:24.286 [2024-11-26 20:54:18.715800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.286 [2024-11-26 20:54:18.715857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:24.286 [2024-11-26 20:54:18.715874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2757.849 ms 00:26:24.286 [2024-11-26 20:54:18.715888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.286 [2024-11-26 20:54:18.754664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.286 [2024-11-26 20:54:18.754714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:24.286 [2024-11-26 20:54:18.754732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.446 ms 00:26:24.286 [2024-11-26 20:54:18.754745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.286 [2024-11-26 20:54:18.754890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.286 [2024-11-26 20:54:18.754906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:24.286 [2024-11-26 20:54:18.754918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:24.286 [2024-11-26 20:54:18.754937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.286 [2024-11-26 20:54:18.801543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.286 [2024-11-26 20:54:18.801589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:24.286 [2024-11-26 20:54:18.801603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.546 ms 00:26:24.286 [2024-11-26 20:54:18.801642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.286 [2024-11-26 20:54:18.801705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.286 [2024-11-26 20:54:18.801720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:24.286 [2024-11-26 20:54:18.801731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:24.286 [2024-11-26 20:54:18.801754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.286 [2024-11-26 20:54:18.802229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.286 [2024-11-26 20:54:18.802253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:24.286 [2024-11-26 20:54:18.802264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:26:24.286 [2024-11-26 20:54:18.802277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.286 
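(Editor's note: the MiB figures printed by dump_region in the layout dumps above follow directly from the blk_offs/blk_sz block counts in the superblock metadata dump, assuming SPDK FTL's 4 KiB metadata block size; that constant is an assumption here, since the log itself never states the block size. A minimal shell check of the relationship:

    blk2mib() { echo "scale=2; $1 * 4096 / 1048576" | bc; }  # blocks -> MiB at an assumed 4 KiB/block
    blk2mib $(( 0x5000 ))     # l2p region:       80.00 MiB, matches "blocks: 80.00 MiB" above
    blk2mib $(( 0x800 ))      # each p2l region:   8.00 MiB
    blk2mib $(( 0x1900000 ))  # data_btm region: 102400.00 MiB
)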
[2024-11-26 20:54:18.802378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.286 [2024-11-26 20:54:18.802396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:24.286 [2024-11-26 20:54:18.802414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:24.286 [2024-11-26 20:54:18.802430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.286 [2024-11-26 20:54:18.822808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.286 [2024-11-26 20:54:18.822996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:24.286 [2024-11-26 20:54:18.823096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.357 ms 00:26:24.286 [2024-11-26 20:54:18.823138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.286 [2024-11-26 20:54:18.848057] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:24.286 [2024-11-26 20:54:18.851469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.286 [2024-11-26 20:54:18.851603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:24.286 [2024-11-26 20:54:18.851721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.204 ms 00:26:24.286 [2024-11-26 20:54:18.851762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.286 [2024-11-26 20:54:18.936495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.286 [2024-11-26 20:54:18.936754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:24.287 [2024-11-26 20:54:18.936847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.664 ms 00:26:24.287 [2024-11-26 20:54:18.936886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.287 [2024-11-26 20:54:18.937121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.287 [2024-11-26 20:54:18.937228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:24.287 [2024-11-26 20:54:18.937304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:26:24.287 [2024-11-26 20:54:18.937335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.287 [2024-11-26 20:54:18.974206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.287 [2024-11-26 20:54:18.974368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:24.287 [2024-11-26 20:54:18.974489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.788 ms 00:26:24.287 [2024-11-26 20:54:18.974527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.287 [2024-11-26 20:54:19.010422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.287 [2024-11-26 20:54:19.010574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:24.287 [2024-11-26 20:54:19.010698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.825 ms 00:26:24.287 [2024-11-26 20:54:19.010712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.287 [2024-11-26 20:54:19.011427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.287 [2024-11-26 20:54:19.011445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:24.287 
[2024-11-26 20:54:19.011462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:26:24.287 [2024-11-26 20:54:19.011473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.287 [2024-11-26 20:54:19.110149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.287 [2024-11-26 20:54:19.110192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:24.287 [2024-11-26 20:54:19.110214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.614 ms 00:26:24.287 [2024-11-26 20:54:19.110226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.287 [2024-11-26 20:54:19.148075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.287 [2024-11-26 20:54:19.148225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:24.287 [2024-11-26 20:54:19.148253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.757 ms 00:26:24.287 [2024-11-26 20:54:19.148265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.287 [2024-11-26 20:54:19.185418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.287 [2024-11-26 20:54:19.185454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:24.287 [2024-11-26 20:54:19.185471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.106 ms 00:26:24.287 [2024-11-26 20:54:19.185482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.287 [2024-11-26 20:54:19.222654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.287 [2024-11-26 20:54:19.222786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:24.287 [2024-11-26 20:54:19.222812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.110 ms 00:26:24.287 [2024-11-26 20:54:19.222823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.287 [2024-11-26 20:54:19.222890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.287 [2024-11-26 20:54:19.222903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:24.287 [2024-11-26 20:54:19.222920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:24.287 [2024-11-26 20:54:19.222930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.287 [2024-11-26 20:54:19.223037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.287 [2024-11-26 20:54:19.223053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:24.287 [2024-11-26 20:54:19.223066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:24.287 [2024-11-26 20:54:19.223076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.287 [2024-11-26 20:54:19.224185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3286.860 ms, result 0 00:26:24.287 { 00:26:24.287 "name": "ftl0", 00:26:24.287 "uuid": "de9129c4-9c7d-4ebf-b0c4-26f94eaac199" 00:26:24.287 } 00:26:24.287 20:54:19 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:24.287 20:54:19 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:24.857 20:54:19 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:26:24.857 20:54:19 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:24.857 [2024-11-26 20:54:19.771608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.857 [2024-11-26 20:54:19.771697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:24.857 [2024-11-26 20:54:19.771730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:24.857 [2024-11-26 20:54:19.771743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.857 [2024-11-26 20:54:19.771772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:24.857 [2024-11-26 20:54:19.775998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.857 [2024-11-26 20:54:19.776032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:24.857 [2024-11-26 20:54:19.776048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.201 ms 00:26:24.857 [2024-11-26 20:54:19.776059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.857 [2024-11-26 20:54:19.776337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.857 [2024-11-26 20:54:19.776352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:24.857 [2024-11-26 20:54:19.776366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:26:24.857 [2024-11-26 20:54:19.776376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.857 [2024-11-26 20:54:19.778948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.857 [2024-11-26 20:54:19.778970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:24.857 [2024-11-26 20:54:19.778984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms 00:26:24.857 [2024-11-26 20:54:19.778994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.857 [2024-11-26 20:54:19.784048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.857 [2024-11-26 20:54:19.784083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:24.857 [2024-11-26 20:54:19.784098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.031 ms 00:26:24.857 [2024-11-26 20:54:19.784124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.857 [2024-11-26 20:54:19.820872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.857 [2024-11-26 20:54:19.820919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:24.857 [2024-11-26 20:54:19.820934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.693 ms 00:26:24.857 [2024-11-26 20:54:19.820960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.857 [2024-11-26 20:54:19.843726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.857 [2024-11-26 20:54:19.843763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:24.857 [2024-11-26 20:54:19.843780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.715 ms 00:26:24.857 [2024-11-26 20:54:19.843791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.857 [2024-11-26 20:54:19.843943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.857 [2024-11-26 20:54:19.843958] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:24.857 [2024-11-26 20:54:19.843972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:26:24.857 [2024-11-26 20:54:19.843982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.118 [2024-11-26 20:54:19.880787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.118 [2024-11-26 20:54:19.880924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:25.118 [2024-11-26 20:54:19.880967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.777 ms 00:26:25.118 [2024-11-26 20:54:19.880977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.118 [2024-11-26 20:54:19.916898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.118 [2024-11-26 20:54:19.916931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:25.118 [2024-11-26 20:54:19.916957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.877 ms 00:26:25.118 [2024-11-26 20:54:19.916983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.118 [2024-11-26 20:54:19.952303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.118 [2024-11-26 20:54:19.952337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:25.118 [2024-11-26 20:54:19.952352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.272 ms 00:26:25.118 [2024-11-26 20:54:19.952362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.118 [2024-11-26 20:54:19.988273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.118 [2024-11-26 20:54:19.988308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:25.118 [2024-11-26 20:54:19.988323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.812 ms 00:26:25.118 [2024-11-26 20:54:19.988333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.118 [2024-11-26 20:54:19.988374] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:25.118 [2024-11-26 20:54:19.988390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988505] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 
[2024-11-26 20:54:19.988854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.988992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:25.118 [2024-11-26 20:54:19.989159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:26:25.119 [2024-11-26 20:54:19.989172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:25.119 [2024-11-26 20:54:19.989680] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:25.119 [2024-11-26 20:54:19.989693] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de9129c4-9c7d-4ebf-b0c4-26f94eaac199 00:26:25.119 [2024-11-26 20:54:19.989704] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:25.119 [2024-11-26 20:54:19.989719] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:25.119 [2024-11-26 20:54:19.989732] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:25.119 [2024-11-26 20:54:19.989745] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:25.119 [2024-11-26 20:54:19.989754] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:25.119 [2024-11-26 20:54:19.989767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:25.119 [2024-11-26 20:54:19.989777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:25.119 [2024-11-26 20:54:19.989789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:25.119 [2024-11-26 20:54:19.989798] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:26:25.119 [2024-11-26 20:54:19.989810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.119 [2024-11-26 20:54:19.989820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:25.119 [2024-11-26 20:54:19.989834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:26:25.119 [2024-11-26 20:54:19.989847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.119 [2024-11-26 20:54:20.010681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.119 [2024-11-26 20:54:20.010715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:25.119 [2024-11-26 20:54:20.010731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.763 ms 00:26:25.119 [2024-11-26 20:54:20.010741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.119 [2024-11-26 20:54:20.011346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.119 [2024-11-26 20:54:20.011361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:25.119 [2024-11-26 20:54:20.011378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:26:25.119 [2024-11-26 20:54:20.011388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.119 [2024-11-26 20:54:20.080223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.119 [2024-11-26 20:54:20.080275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:25.119 [2024-11-26 20:54:20.080294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.119 [2024-11-26 20:54:20.080305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.119 [2024-11-26 20:54:20.080382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.119 [2024-11-26 20:54:20.080393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:25.119 [2024-11-26 20:54:20.080410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.119 [2024-11-26 20:54:20.080420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.119 [2024-11-26 20:54:20.080523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.119 [2024-11-26 20:54:20.080539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:25.119 [2024-11-26 20:54:20.080552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.119 [2024-11-26 20:54:20.080562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.119 [2024-11-26 20:54:20.080589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.119 [2024-11-26 20:54:20.080600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:25.119 [2024-11-26 20:54:20.080628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.119 [2024-11-26 20:54:20.080642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.379 [2024-11-26 20:54:20.204532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.379 [2024-11-26 20:54:20.204782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:25.379 [2024-11-26 20:54:20.204812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:26:25.379 [2024-11-26 20:54:20.204824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.379 [2024-11-26 20:54:20.304582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.379 [2024-11-26 20:54:20.304799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.379 [2024-11-26 20:54:20.304831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.379 [2024-11-26 20:54:20.304843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.379 [2024-11-26 20:54:20.304981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.379 [2024-11-26 20:54:20.304995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.379 [2024-11-26 20:54:20.305008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.379 [2024-11-26 20:54:20.305018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.379 [2024-11-26 20:54:20.305087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.379 [2024-11-26 20:54:20.305099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.379 [2024-11-26 20:54:20.305113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.379 [2024-11-26 20:54:20.305123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.379 [2024-11-26 20:54:20.305260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.379 [2024-11-26 20:54:20.305275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.379 [2024-11-26 20:54:20.305289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.379 [2024-11-26 20:54:20.305299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.379 [2024-11-26 20:54:20.305342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.379 [2024-11-26 20:54:20.305355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:25.379 [2024-11-26 20:54:20.305368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.379 [2024-11-26 20:54:20.305378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.379 [2024-11-26 20:54:20.305423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.379 [2024-11-26 20:54:20.305435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.379 [2024-11-26 20:54:20.305448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.379 [2024-11-26 20:54:20.305458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.379 [2024-11-26 20:54:20.305509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.379 [2024-11-26 20:54:20.305521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.379 [2024-11-26 20:54:20.305534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.379 [2024-11-26 20:54:20.305545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.379 [2024-11-26 20:54:20.305696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.047 ms, result 0 00:26:25.379 true 00:26:25.379 20:54:20 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79673 
00:26:25.379 20:54:20 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79673 ']' 00:26:25.379 20:54:20 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79673 00:26:25.379 20:54:20 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:26:25.379 20:54:20 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:25.379 20:54:20 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79673 00:26:25.379 killing process with pid 79673 00:26:25.379 20:54:20 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:25.379 20:54:20 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:25.379 20:54:20 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79673' 00:26:25.379 20:54:20 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79673 00:26:25.379 20:54:20 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79673 00:26:30.650 20:54:25 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:34.842 262144+0 records in 00:26:34.842 262144+0 records out 00:26:34.842 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.16133 s, 258 MB/s 00:26:34.842 20:54:29 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:36.782 20:54:31 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:36.782 [2024-11-26 20:54:31.445588] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:26:36.782 [2024-11-26 20:54:31.447045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79904 ] 00:26:36.782 [2024-11-26 20:54:31.663363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.039 [2024-11-26 20:54:31.820025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.297 [2024-11-26 20:54:32.202570] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.297 [2024-11-26 20:54:32.202656] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.557 [2024-11-26 20:54:32.370309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.370358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:37.557 [2024-11-26 20:54:32.370373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:37.557 [2024-11-26 20:54:32.370384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.370436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.370454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:37.557 [2024-11-26 20:54:32.370465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:37.557 [2024-11-26 20:54:32.370474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.370496] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:26:37.557 [2024-11-26 20:54:32.371583] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:37.557 [2024-11-26 20:54:32.371627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.371638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:37.557 [2024-11-26 20:54:32.371666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:26:37.557 [2024-11-26 20:54:32.371677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.373208] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:37.557 [2024-11-26 20:54:32.393419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.393457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:37.557 [2024-11-26 20:54:32.393472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.211 ms 00:26:37.557 [2024-11-26 20:54:32.393483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.393550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.393563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:37.557 [2024-11-26 20:54:32.393574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:37.557 [2024-11-26 20:54:32.393584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.400250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.400280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:37.557 [2024-11-26 20:54:32.400291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.570 ms 00:26:37.557 [2024-11-26 20:54:32.400305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.400383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.400396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:37.557 [2024-11-26 20:54:32.400407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:37.557 [2024-11-26 20:54:32.400417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.400458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.400470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:37.557 [2024-11-26 20:54:32.400480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:37.557 [2024-11-26 20:54:32.400490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.400519] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:37.557 [2024-11-26 20:54:32.405205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.405236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:37.557 [2024-11-26 20:54:32.405250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.692 ms 00:26:37.557 [2024-11-26 20:54:32.405276] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.405306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.405317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:37.557 [2024-11-26 20:54:32.405328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:37.557 [2024-11-26 20:54:32.405338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.405390] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:37.557 [2024-11-26 20:54:32.405413] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:37.557 [2024-11-26 20:54:32.405449] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:37.557 [2024-11-26 20:54:32.405477] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:37.557 [2024-11-26 20:54:32.405569] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:37.557 [2024-11-26 20:54:32.405582] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:37.557 [2024-11-26 20:54:32.405595] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:37.557 [2024-11-26 20:54:32.405608] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:37.557 [2024-11-26 20:54:32.405638] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:37.557 [2024-11-26 20:54:32.405650] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:37.557 [2024-11-26 20:54:32.405660] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:37.557 [2024-11-26 20:54:32.405676] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:37.557 [2024-11-26 20:54:32.405686] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:37.557 [2024-11-26 20:54:32.405696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.405707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:37.557 [2024-11-26 20:54:32.405717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:26:37.557 [2024-11-26 20:54:32.405727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.405801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.557 [2024-11-26 20:54:32.405812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:37.557 [2024-11-26 20:54:32.405822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:37.557 [2024-11-26 20:54:32.405832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.557 [2024-11-26 20:54:32.405936] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:37.557 [2024-11-26 20:54:32.405951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:37.557 [2024-11-26 20:54:32.405962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:26:37.557 [2024-11-26 20:54:32.405972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.557 [2024-11-26 20:54:32.405983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:37.557 [2024-11-26 20:54:32.405992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:37.557 [2024-11-26 20:54:32.406002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:37.557 [2024-11-26 20:54:32.406011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:37.557 [2024-11-26 20:54:32.406021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:37.557 [2024-11-26 20:54:32.406030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:37.557 [2024-11-26 20:54:32.406040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:37.557 [2024-11-26 20:54:32.406049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:37.557 [2024-11-26 20:54:32.406058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:37.557 [2024-11-26 20:54:32.406082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:37.557 [2024-11-26 20:54:32.406092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:37.557 [2024-11-26 20:54:32.406101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.557 [2024-11-26 20:54:32.406111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:37.557 [2024-11-26 20:54:32.406120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:37.557 [2024-11-26 20:54:32.406129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.557 [2024-11-26 20:54:32.406138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:37.557 [2024-11-26 20:54:32.406147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:37.557 [2024-11-26 20:54:32.406157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.557 [2024-11-26 20:54:32.406166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:37.557 [2024-11-26 20:54:32.406175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:37.557 [2024-11-26 20:54:32.406184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.557 [2024-11-26 20:54:32.406194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:37.558 [2024-11-26 20:54:32.406203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:37.558 [2024-11-26 20:54:32.406212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.558 [2024-11-26 20:54:32.406221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:37.558 [2024-11-26 20:54:32.406230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:37.558 [2024-11-26 20:54:32.406239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.558 [2024-11-26 20:54:32.406248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:37.558 [2024-11-26 20:54:32.406258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:37.558 [2024-11-26 20:54:32.406267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:37.558 [2024-11-26 20:54:32.406275] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:26:37.558 [2024-11-26 20:54:32.406285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:37.558 [2024-11-26 20:54:32.406293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:37.558 [2024-11-26 20:54:32.406302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:37.558 [2024-11-26 20:54:32.406311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:37.558 [2024-11-26 20:54:32.406320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.558 [2024-11-26 20:54:32.406328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:37.558 [2024-11-26 20:54:32.406338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:37.558 [2024-11-26 20:54:32.406346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.558 [2024-11-26 20:54:32.406355] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:37.558 [2024-11-26 20:54:32.406365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:37.558 [2024-11-26 20:54:32.406375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:37.558 [2024-11-26 20:54:32.406385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.558 [2024-11-26 20:54:32.406395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:37.558 [2024-11-26 20:54:32.406404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:37.558 [2024-11-26 20:54:32.406413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:37.558 [2024-11-26 20:54:32.406422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:37.558 [2024-11-26 20:54:32.406431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:37.558 [2024-11-26 20:54:32.406440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:37.558 [2024-11-26 20:54:32.406450] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:37.558 [2024-11-26 20:54:32.406462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.558 [2024-11-26 20:54:32.406480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:37.558 [2024-11-26 20:54:32.406490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:37.558 [2024-11-26 20:54:32.406501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:37.558 [2024-11-26 20:54:32.406511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:37.558 [2024-11-26 20:54:32.406521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:37.558 [2024-11-26 20:54:32.406532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:37.558 [2024-11-26 20:54:32.406542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:37.558 [2024-11-26 20:54:32.406553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:37.558 [2024-11-26 20:54:32.406563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:37.558 [2024-11-26 20:54:32.406573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:37.558 [2024-11-26 20:54:32.406583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:37.558 [2024-11-26 20:54:32.406593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:37.558 [2024-11-26 20:54:32.406604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:37.558 [2024-11-26 20:54:32.406625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:37.558 [2024-11-26 20:54:32.406636] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:37.558 [2024-11-26 20:54:32.406648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.558 [2024-11-26 20:54:32.406659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:37.558 [2024-11-26 20:54:32.406669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:37.558 [2024-11-26 20:54:32.406680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:37.558 [2024-11-26 20:54:32.406690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:37.558 [2024-11-26 20:54:32.406701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.558 [2024-11-26 20:54:32.406711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:37.558 [2024-11-26 20:54:32.406723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:26:37.558 [2024-11-26 20:54:32.406732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.558 [2024-11-26 20:54:32.448831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.558 [2024-11-26 20:54:32.448870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:37.558 [2024-11-26 20:54:32.448884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.047 ms 00:26:37.558 [2024-11-26 20:54:32.448930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.558 [2024-11-26 20:54:32.449012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.558 [2024-11-26 20:54:32.449023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:37.558 [2024-11-26 20:54:32.449033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.052 ms 00:26:37.558 [2024-11-26 20:54:32.449043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.558 [2024-11-26 20:54:32.507724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.558 [2024-11-26 20:54:32.507921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:37.558 [2024-11-26 20:54:32.507945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.610 ms 00:26:37.558 [2024-11-26 20:54:32.507956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.558 [2024-11-26 20:54:32.507999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.558 [2024-11-26 20:54:32.508010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:37.558 [2024-11-26 20:54:32.508027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:37.558 [2024-11-26 20:54:32.508037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.558 [2024-11-26 20:54:32.508522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.558 [2024-11-26 20:54:32.508536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:37.558 [2024-11-26 20:54:32.508546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:26:37.558 [2024-11-26 20:54:32.508556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.558 [2024-11-26 20:54:32.508696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.558 [2024-11-26 20:54:32.508711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:37.558 [2024-11-26 20:54:32.508728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:26:37.558 [2024-11-26 20:54:32.508738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.558 [2024-11-26 20:54:32.528577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.558 [2024-11-26 20:54:32.528629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:37.558 [2024-11-26 20:54:32.528644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.819 ms 00:26:37.558 [2024-11-26 20:54:32.528654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.817 [2024-11-26 20:54:32.548206] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:37.817 [2024-11-26 20:54:32.548261] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:37.817 [2024-11-26 20:54:32.548277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.817 [2024-11-26 20:54:32.548288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:37.817 [2024-11-26 20:54:32.548300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.515 ms 00:26:37.817 [2024-11-26 20:54:32.548309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.817 [2024-11-26 20:54:32.578820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.817 [2024-11-26 20:54:32.578987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:37.817 [2024-11-26 20:54:32.579009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.467 ms 00:26:37.817 [2024-11-26 20:54:32.579020] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.817 [2024-11-26 20:54:32.597584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.817 [2024-11-26 20:54:32.597741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:37.817 [2024-11-26 20:54:32.597762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.524 ms 00:26:37.817 [2024-11-26 20:54:32.597772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.817 [2024-11-26 20:54:32.618606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.817 [2024-11-26 20:54:32.618648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:37.817 [2024-11-26 20:54:32.618661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.798 ms 00:26:37.817 [2024-11-26 20:54:32.618671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.817 [2024-11-26 20:54:32.619520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.817 [2024-11-26 20:54:32.619552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:37.817 [2024-11-26 20:54:32.619564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:26:37.817 [2024-11-26 20:54:32.619581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.817 [2024-11-26 20:54:32.708504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.817 [2024-11-26 20:54:32.708562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:37.817 [2024-11-26 20:54:32.708579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.899 ms 00:26:37.817 [2024-11-26 20:54:32.708601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.817 [2024-11-26 20:54:32.719689] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:37.817 [2024-11-26 20:54:32.722760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.817 [2024-11-26 20:54:32.722791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:37.817 [2024-11-26 20:54:32.722806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.085 ms 00:26:37.817 [2024-11-26 20:54:32.722816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.817 [2024-11-26 20:54:32.722922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.817 [2024-11-26 20:54:32.722936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:37.817 [2024-11-26 20:54:32.722948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:37.817 [2024-11-26 20:54:32.722958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.817 [2024-11-26 20:54:32.723059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.817 [2024-11-26 20:54:32.723072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:37.818 [2024-11-26 20:54:32.723083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:37.818 [2024-11-26 20:54:32.723092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.818 [2024-11-26 20:54:32.723116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.818 [2024-11-26 20:54:32.723127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:26:37.818 [2024-11-26 20:54:32.723138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:37.818 [2024-11-26 20:54:32.723148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.818 [2024-11-26 20:54:32.723179] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:37.818 [2024-11-26 20:54:32.723194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.818 [2024-11-26 20:54:32.723204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:37.818 [2024-11-26 20:54:32.723214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:37.818 [2024-11-26 20:54:32.723224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.818 [2024-11-26 20:54:32.761497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.818 [2024-11-26 20:54:32.761535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:37.818 [2024-11-26 20:54:32.761550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.249 ms 00:26:37.818 [2024-11-26 20:54:32.761567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.818 [2024-11-26 20:54:32.761657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.818 [2024-11-26 20:54:32.761671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:37.818 [2024-11-26 20:54:32.761682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:37.818 [2024-11-26 20:54:32.761692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.818 [2024-11-26 20:54:32.762857] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 392.017 ms, result 0 00:26:39.194  [2024-11-26T20:54:35.124Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-26T20:54:36.061Z] Copying: 56/1024 [MB] (28 MBps) [2024-11-26T20:54:36.997Z] Copying: 84/1024 [MB] (28 MBps) [2024-11-26T20:54:37.935Z] Copying: 112/1024 [MB] (27 MBps) [2024-11-26T20:54:38.872Z] Copying: 139/1024 [MB] (27 MBps) [2024-11-26T20:54:39.811Z] Copying: 165/1024 [MB] (25 MBps) [2024-11-26T20:54:41.192Z] Copying: 191/1024 [MB] (25 MBps) [2024-11-26T20:54:42.127Z] Copying: 218/1024 [MB] (26 MBps) [2024-11-26T20:54:43.063Z] Copying: 244/1024 [MB] (26 MBps) [2024-11-26T20:54:43.999Z] Copying: 272/1024 [MB] (27 MBps) [2024-11-26T20:54:44.934Z] Copying: 300/1024 [MB] (27 MBps) [2024-11-26T20:54:45.868Z] Copying: 327/1024 [MB] (27 MBps) [2024-11-26T20:54:46.802Z] Copying: 354/1024 [MB] (26 MBps) [2024-11-26T20:54:48.178Z] Copying: 381/1024 [MB] (27 MBps) [2024-11-26T20:54:49.115Z] Copying: 410/1024 [MB] (28 MBps) [2024-11-26T20:54:50.051Z] Copying: 437/1024 [MB] (27 MBps) [2024-11-26T20:54:50.986Z] Copying: 463/1024 [MB] (26 MBps) [2024-11-26T20:54:51.977Z] Copying: 491/1024 [MB] (28 MBps) [2024-11-26T20:54:52.912Z] Copying: 520/1024 [MB] (28 MBps) [2024-11-26T20:54:53.847Z] Copying: 548/1024 [MB] (28 MBps) [2024-11-26T20:54:54.781Z] Copying: 576/1024 [MB] (27 MBps) [2024-11-26T20:54:56.156Z] Copying: 603/1024 [MB] (27 MBps) [2024-11-26T20:54:57.091Z] Copying: 632/1024 [MB] (28 MBps) [2024-11-26T20:54:58.027Z] Copying: 660/1024 [MB] (28 MBps) [2024-11-26T20:54:58.965Z] Copying: 689/1024 [MB] (28 MBps) [2024-11-26T20:54:59.901Z] Copying: 717/1024 [MB] (28 MBps) [2024-11-26T20:55:00.837Z] Copying: 745/1024 [MB] (27 
MBps) [2024-11-26T20:55:02.212Z] Copying: 773/1024 [MB] (28 MBps) [2024-11-26T20:55:02.780Z] Copying: 801/1024 [MB] (27 MBps) [2024-11-26T20:55:04.152Z] Copying: 829/1024 [MB] (28 MBps) [2024-11-26T20:55:05.083Z] Copying: 858/1024 [MB] (28 MBps) [2024-11-26T20:55:06.029Z] Copying: 887/1024 [MB] (29 MBps) [2024-11-26T20:55:06.977Z] Copying: 915/1024 [MB] (28 MBps) [2024-11-26T20:55:07.910Z] Copying: 943/1024 [MB] (27 MBps) [2024-11-26T20:55:08.845Z] Copying: 971/1024 [MB] (27 MBps) [2024-11-26T20:55:09.779Z] Copying: 999/1024 [MB] (28 MBps) [2024-11-26T20:55:09.779Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-26 20:55:09.659257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.785 [2024-11-26 20:55:09.659314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:14.785 [2024-11-26 20:55:09.659330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:14.785 [2024-11-26 20:55:09.659342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.785 [2024-11-26 20:55:09.659366] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:14.785 [2024-11-26 20:55:09.663920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.785 [2024-11-26 20:55:09.663958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:14.785 [2024-11-26 20:55:09.663979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.536 ms 00:27:14.785 [2024-11-26 20:55:09.663990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.785 [2024-11-26 20:55:09.665990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.785 [2024-11-26 20:55:09.666030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:14.785 [2024-11-26 20:55:09.666044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.974 ms 00:27:14.785 [2024-11-26 20:55:09.666054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.785 [2024-11-26 20:55:09.682131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.785 [2024-11-26 20:55:09.682296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:14.785 [2024-11-26 20:55:09.682319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.059 ms 00:27:14.785 [2024-11-26 20:55:09.682330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.785 [2024-11-26 20:55:09.687432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.785 [2024-11-26 20:55:09.687463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:14.785 [2024-11-26 20:55:09.687474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.056 ms 00:27:14.785 [2024-11-26 20:55:09.687500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.785 [2024-11-26 20:55:09.724669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.785 [2024-11-26 20:55:09.724708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:14.785 [2024-11-26 20:55:09.724722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.107 ms 00:27:14.785 [2024-11-26 20:55:09.724733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.785 [2024-11-26 20:55:09.746596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:14.785 [2024-11-26 20:55:09.746642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:14.785 [2024-11-26 20:55:09.746656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.824 ms 00:27:14.785 [2024-11-26 20:55:09.746667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.785 [2024-11-26 20:55:09.746794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.785 [2024-11-26 20:55:09.746814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:14.785 [2024-11-26 20:55:09.746825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:27:14.785 [2024-11-26 20:55:09.746835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.044 [2024-11-26 20:55:09.784899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.044 [2024-11-26 20:55:09.784936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:15.044 [2024-11-26 20:55:09.784949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.047 ms 00:27:15.044 [2024-11-26 20:55:09.784959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.044 [2024-11-26 20:55:09.821420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.044 [2024-11-26 20:55:09.821456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:15.044 [2024-11-26 20:55:09.821470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.423 ms 00:27:15.044 [2024-11-26 20:55:09.821480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.044 [2024-11-26 20:55:09.857772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.044 [2024-11-26 20:55:09.857808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:15.044 [2024-11-26 20:55:09.857821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.254 ms 00:27:15.044 [2024-11-26 20:55:09.857831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.044 [2024-11-26 20:55:09.894061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.044 [2024-11-26 20:55:09.894098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:15.044 [2024-11-26 20:55:09.894111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.150 ms 00:27:15.044 [2024-11-26 20:55:09.894121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.044 [2024-11-26 20:55:09.894157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:15.044 [2024-11-26 20:55:09.894174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:15.044 [2024-11-26 20:55:09.894192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:15.044 [2024-11-26 20:55:09.894204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:15.044 [2024-11-26 20:55:09.894215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:15.044 [2024-11-26 20:55:09.894225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:15.044 [2024-11-26 20:55:09.894236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 
/ 261120 wr_cnt: 0 state: free 00:27:15.044 [2024-11-26 20:55:09.894246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:15.044 [2024-11-26 20:55:09.894257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894789] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.894993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 
20:55:09.895068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:15.045 [2024-11-26 20:55:09.895233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:15.046 [2024-11-26 20:55:09.895245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:15.046 [2024-11-26 20:55:09.895255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:15.046 [2024-11-26 20:55:09.895266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:15.046 [2024-11-26 20:55:09.895284] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:15.046 [2024-11-26 20:55:09.895298] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de9129c4-9c7d-4ebf-b0c4-26f94eaac199 00:27:15.046 [2024-11-26 20:55:09.895308] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:15.046 [2024-11-26 20:55:09.895318] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:15.046 [2024-11-26 20:55:09.895328] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:15.046 [2024-11-26 20:55:09.895338] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
00:27:15.046 [2024-11-26 20:55:09.895348] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:15.046 [2024-11-26 20:55:09.895368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:15.046 [2024-11-26 20:55:09.895378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:15.046 [2024-11-26 20:55:09.895387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:15.046 [2024-11-26 20:55:09.895396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:15.046 [2024-11-26 20:55:09.895406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.046 [2024-11-26 20:55:09.895416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:15.046 [2024-11-26 20:55:09.895427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:27:15.046 [2024-11-26 20:55:09.895438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.046 [2024-11-26 20:55:09.915901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.046 [2024-11-26 20:55:09.916043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:15.046 [2024-11-26 20:55:09.916063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.422 ms 00:27:15.046 [2024-11-26 20:55:09.916073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.046 [2024-11-26 20:55:09.916633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.046 [2024-11-26 20:55:09.916650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:15.046 [2024-11-26 20:55:09.916661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:27:15.046 [2024-11-26 20:55:09.916677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.046 [2024-11-26 20:55:09.968722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.046 [2024-11-26 20:55:09.968758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:15.046 [2024-11-26 20:55:09.968771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.046 [2024-11-26 20:55:09.968799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.046 [2024-11-26 20:55:09.968854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.046 [2024-11-26 20:55:09.968866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:15.046 [2024-11-26 20:55:09.968876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.046 [2024-11-26 20:55:09.968900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.046 [2024-11-26 20:55:09.968986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.046 [2024-11-26 20:55:09.968999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:15.046 [2024-11-26 20:55:09.969009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.046 [2024-11-26 20:55:09.969019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.046 [2024-11-26 20:55:09.969036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.046 [2024-11-26 20:55:09.969046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:15.046 [2024-11-26 20:55:09.969056] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.046 [2024-11-26 20:55:09.969066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.305 [2024-11-26 20:55:10.098531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.305 [2024-11-26 20:55:10.098597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:15.305 [2024-11-26 20:55:10.098638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.305 [2024-11-26 20:55:10.098649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.305 [2024-11-26 20:55:10.199126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.305 [2024-11-26 20:55:10.199188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:15.305 [2024-11-26 20:55:10.199202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.305 [2024-11-26 20:55:10.199234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.305 [2024-11-26 20:55:10.199326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.305 [2024-11-26 20:55:10.199338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:15.305 [2024-11-26 20:55:10.199349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.305 [2024-11-26 20:55:10.199358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.305 [2024-11-26 20:55:10.199401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.305 [2024-11-26 20:55:10.199412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:15.305 [2024-11-26 20:55:10.199423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.305 [2024-11-26 20:55:10.199432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.305 [2024-11-26 20:55:10.199548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.305 [2024-11-26 20:55:10.199561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:15.305 [2024-11-26 20:55:10.199571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.305 [2024-11-26 20:55:10.199581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.305 [2024-11-26 20:55:10.199642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.305 [2024-11-26 20:55:10.199656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:15.305 [2024-11-26 20:55:10.199667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.305 [2024-11-26 20:55:10.199676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.305 [2024-11-26 20:55:10.199731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.305 [2024-11-26 20:55:10.199748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:15.305 [2024-11-26 20:55:10.199758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.305 [2024-11-26 20:55:10.199784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.305 [2024-11-26 20:55:10.199827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.305 [2024-11-26 20:55:10.199839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:27:15.305 [2024-11-26 20:55:10.199849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.305 [2024-11-26 20:55:10.199859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.305 [2024-11-26 20:55:10.199987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.695 ms, result 0 00:27:16.681 00:27:16.681 00:27:16.681 20:55:11 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:16.681 [2024-11-26 20:55:11.448057] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:27:16.681 [2024-11-26 20:55:11.448233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80306 ] 00:27:16.681 [2024-11-26 20:55:11.639045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.940 [2024-11-26 20:55:11.748953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.200 [2024-11-26 20:55:12.103740] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:17.200 [2024-11-26 20:55:12.103806] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:17.460 [2024-11-26 20:55:12.264660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.264711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:17.460 [2024-11-26 20:55:12.264727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:17.460 [2024-11-26 20:55:12.264738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.264785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.264800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:17.460 [2024-11-26 20:55:12.264811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:17.460 [2024-11-26 20:55:12.264821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.264842] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:17.460 [2024-11-26 20:55:12.265884] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:17.460 [2024-11-26 20:55:12.265912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.265923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:17.460 [2024-11-26 20:55:12.265935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:27:17.460 [2024-11-26 20:55:12.265945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.267436] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:17.460 [2024-11-26 20:55:12.286970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.287012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Load super block 00:27:17.460 [2024-11-26 20:55:12.287027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.535 ms 00:27:17.460 [2024-11-26 20:55:12.287037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.287115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.287129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:17.460 [2024-11-26 20:55:12.287141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:17.460 [2024-11-26 20:55:12.287150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.293812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.293841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:17.460 [2024-11-26 20:55:12.293869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.586 ms 00:27:17.460 [2024-11-26 20:55:12.293888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.293994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.294008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:17.460 [2024-11-26 20:55:12.294019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:27:17.460 [2024-11-26 20:55:12.294029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.294076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.294088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:17.460 [2024-11-26 20:55:12.294098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:17.460 [2024-11-26 20:55:12.294108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.294141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:17.460 [2024-11-26 20:55:12.299072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.299105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:17.460 [2024-11-26 20:55:12.299124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.937 ms 00:27:17.460 [2024-11-26 20:55:12.299134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.299164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.299176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:17.460 [2024-11-26 20:55:12.299186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:17.460 [2024-11-26 20:55:12.299197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.299252] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:17.460 [2024-11-26 20:55:12.299280] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:17.460 [2024-11-26 20:55:12.299317] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:17.460 [2024-11-26 
20:55:12.299341] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:17.460 [2024-11-26 20:55:12.299435] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:17.460 [2024-11-26 20:55:12.299448] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:17.460 [2024-11-26 20:55:12.299461] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:17.460 [2024-11-26 20:55:12.299474] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:17.460 [2024-11-26 20:55:12.299486] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:17.460 [2024-11-26 20:55:12.299497] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:17.460 [2024-11-26 20:55:12.299507] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:17.460 [2024-11-26 20:55:12.299524] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:17.460 [2024-11-26 20:55:12.299534] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:17.460 [2024-11-26 20:55:12.299544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.299554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:17.460 [2024-11-26 20:55:12.299565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:27:17.460 [2024-11-26 20:55:12.299575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.299669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.460 [2024-11-26 20:55:12.299681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:17.460 [2024-11-26 20:55:12.299692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:17.460 [2024-11-26 20:55:12.299702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.460 [2024-11-26 20:55:12.299808] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:17.460 [2024-11-26 20:55:12.299823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:17.460 [2024-11-26 20:55:12.299834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:17.460 [2024-11-26 20:55:12.299844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.460 [2024-11-26 20:55:12.299854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:17.460 [2024-11-26 20:55:12.299864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:17.460 [2024-11-26 20:55:12.299873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:17.460 [2024-11-26 20:55:12.299882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:17.460 [2024-11-26 20:55:12.299892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:17.460 [2024-11-26 20:55:12.299902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:17.460 [2024-11-26 20:55:12.299911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:17.460 [2024-11-26 20:55:12.299921] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:17.460 [2024-11-26 20:55:12.299930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:17.460 [2024-11-26 20:55:12.299953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:17.460 [2024-11-26 20:55:12.299963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:17.460 [2024-11-26 20:55:12.299972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.460 [2024-11-26 20:55:12.299981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:17.461 [2024-11-26 20:55:12.299990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:17.461 [2024-11-26 20:55:12.299999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.461 [2024-11-26 20:55:12.300009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:17.461 [2024-11-26 20:55:12.300018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:17.461 [2024-11-26 20:55:12.300027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:17.461 [2024-11-26 20:55:12.300036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:17.461 [2024-11-26 20:55:12.300046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:17.461 [2024-11-26 20:55:12.300055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:17.461 [2024-11-26 20:55:12.300064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:17.461 [2024-11-26 20:55:12.300073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:17.461 [2024-11-26 20:55:12.300082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:17.461 [2024-11-26 20:55:12.300091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:17.461 [2024-11-26 20:55:12.300101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:17.461 [2024-11-26 20:55:12.300110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:17.461 [2024-11-26 20:55:12.300119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:17.461 [2024-11-26 20:55:12.300128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:17.461 [2024-11-26 20:55:12.300136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:17.461 [2024-11-26 20:55:12.300145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:17.461 [2024-11-26 20:55:12.300155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:17.461 [2024-11-26 20:55:12.300164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:17.461 [2024-11-26 20:55:12.300173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:17.461 [2024-11-26 20:55:12.300182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:17.461 [2024-11-26 20:55:12.300191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.461 [2024-11-26 20:55:12.300200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:17.461 [2024-11-26 20:55:12.300211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:17.461 [2024-11-26 20:55:12.300220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.461 [2024-11-26 
20:55:12.300229] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:17.461 [2024-11-26 20:55:12.300238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:17.461 [2024-11-26 20:55:12.300248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:17.461 [2024-11-26 20:55:12.300258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.461 [2024-11-26 20:55:12.300269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:17.461 [2024-11-26 20:55:12.300278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:17.461 [2024-11-26 20:55:12.300287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:17.461 [2024-11-26 20:55:12.300296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:17.461 [2024-11-26 20:55:12.300306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:17.461 [2024-11-26 20:55:12.300315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:17.461 [2024-11-26 20:55:12.300326] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:17.461 [2024-11-26 20:55:12.300338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:17.461 [2024-11-26 20:55:12.300356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:17.461 [2024-11-26 20:55:12.300367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:17.461 [2024-11-26 20:55:12.300377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:17.461 [2024-11-26 20:55:12.300388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:17.461 [2024-11-26 20:55:12.300398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:17.461 [2024-11-26 20:55:12.300409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:17.461 [2024-11-26 20:55:12.300419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:17.461 [2024-11-26 20:55:12.300429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:17.461 [2024-11-26 20:55:12.300439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:17.461 [2024-11-26 20:55:12.300450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:17.461 [2024-11-26 20:55:12.300460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:17.461 [2024-11-26 20:55:12.300471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:17.461 [2024-11-26 20:55:12.300481] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:17.461 [2024-11-26 20:55:12.300492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:17.461 [2024-11-26 20:55:12.300501] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:17.461 [2024-11-26 20:55:12.300513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:17.461 [2024-11-26 20:55:12.300523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:17.461 [2024-11-26 20:55:12.300534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:17.461 [2024-11-26 20:55:12.300544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:17.461 [2024-11-26 20:55:12.300554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:17.461 [2024-11-26 20:55:12.300565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.461 [2024-11-26 20:55:12.300576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:17.461 [2024-11-26 20:55:12.300587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:27:17.461 [2024-11-26 20:55:12.300596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.461 [2024-11-26 20:55:12.340552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.461 [2024-11-26 20:55:12.340592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:17.461 [2024-11-26 20:55:12.340607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.895 ms 00:27:17.461 [2024-11-26 20:55:12.340631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.461 [2024-11-26 20:55:12.340726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.461 [2024-11-26 20:55:12.340737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:17.461 [2024-11-26 20:55:12.340748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:17.461 [2024-11-26 20:55:12.340759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.461 [2024-11-26 20:55:12.396681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.461 [2024-11-26 20:55:12.396725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:17.461 [2024-11-26 20:55:12.396740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.845 ms 00:27:17.461 [2024-11-26 20:55:12.396752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.461 [2024-11-26 20:55:12.396804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.461 [2024-11-26 20:55:12.396816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:17.461 [2024-11-26 20:55:12.396831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:17.461 [2024-11-26 20:55:12.396841] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.461 [2024-11-26 20:55:12.397330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.461 [2024-11-26 20:55:12.397345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:17.461 [2024-11-26 20:55:12.397357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:27:17.461 [2024-11-26 20:55:12.397367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.461 [2024-11-26 20:55:12.397483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.461 [2024-11-26 20:55:12.397496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:17.461 [2024-11-26 20:55:12.397512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:27:17.461 [2024-11-26 20:55:12.397522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.461 [2024-11-26 20:55:12.415704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.461 [2024-11-26 20:55:12.415746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:17.461 [2024-11-26 20:55:12.415762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.159 ms 00:27:17.461 [2024-11-26 20:55:12.415772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.461 [2024-11-26 20:55:12.435514] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:17.461 [2024-11-26 20:55:12.435551] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:17.461 [2024-11-26 20:55:12.435566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.461 [2024-11-26 20:55:12.435578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:17.461 [2024-11-26 20:55:12.435605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.669 ms 00:27:17.461 [2024-11-26 20:55:12.435637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.465554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 [2024-11-26 20:55:12.465592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:17.721 [2024-11-26 20:55:12.465605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.871 ms 00:27:17.721 [2024-11-26 20:55:12.465641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.484350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 [2024-11-26 20:55:12.484399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:17.721 [2024-11-26 20:55:12.484428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.652 ms 00:27:17.721 [2024-11-26 20:55:12.484438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.502649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 [2024-11-26 20:55:12.502683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:17.721 [2024-11-26 20:55:12.502696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.172 ms 00:27:17.721 [2024-11-26 20:55:12.502706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
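The two SB metadata layout dumps above (nvc side, then base dev side) show the regions tiling each device with no holes: every Region's blk_offs is exactly the previous region's blk_offs plus its blk_sz, and the type:0xfffffffe entries are the leftover free areas - on the nvc side 0x7220 + 0x13c0e0 = 0x143300 blocks, which at 4 KiB per block works out to precisely the 5171.00 MiB NV cache capacity logged by ftl_layout_setup. A minimal sketch to re-check that invariant from a saved copy of this console output (ftl.log is a hypothetical file name; assumes one log record per line, and gawk for its match()/strtonum() extensions):

  # flag any hole between consecutive SB layout regions;
  # each new dump restarts at blk_offs:0x0, which resets the check
  gawk 'match($0, /blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)/, m) {
          off = strtonum(m[1]); sz = strtonum(m[2])
          if (off != 0 && off != next_off) printf "gap before %s\n", m[1]
          next_off = off + sz
        }' ftl.log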
00:27:17.721 [2024-11-26 20:55:12.503468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 [2024-11-26 20:55:12.503494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:17.721 [2024-11-26 20:55:12.503510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:27:17.721 [2024-11-26 20:55:12.503520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.591000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 [2024-11-26 20:55:12.591093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:17.721 [2024-11-26 20:55:12.591117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.456 ms 00:27:17.721 [2024-11-26 20:55:12.591128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.602026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:17.721 [2024-11-26 20:55:12.604778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 [2024-11-26 20:55:12.604808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:17.721 [2024-11-26 20:55:12.604822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.601 ms 00:27:17.721 [2024-11-26 20:55:12.604832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.604925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 [2024-11-26 20:55:12.604939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:17.721 [2024-11-26 20:55:12.604956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:17.721 [2024-11-26 20:55:12.604966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.605059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 [2024-11-26 20:55:12.605073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:17.721 [2024-11-26 20:55:12.605084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:17.721 [2024-11-26 20:55:12.605095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.605118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 [2024-11-26 20:55:12.605130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:17.721 [2024-11-26 20:55:12.605141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:17.721 [2024-11-26 20:55:12.605150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.605186] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:17.721 [2024-11-26 20:55:12.605198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 [2024-11-26 20:55:12.605209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:17.721 [2024-11-26 20:55:12.605220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:17.721 [2024-11-26 20:55:12.605230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.641841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 
[2024-11-26 20:55:12.641877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:17.721 [2024-11-26 20:55:12.641897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.588 ms 00:27:17.721 [2024-11-26 20:55:12.641908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.641998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.721 [2024-11-26 20:55:12.642012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:17.721 [2024-11-26 20:55:12.642031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:17.721 [2024-11-26 20:55:12.642041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.721 [2024-11-26 20:55:12.643144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 378.062 ms, result 0 00:27:19.095  [2024-11-26T20:55:15.024Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-26T20:55:47.783Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-26 20:55:47.568124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.789 [2024-11-26 20:55:47.568191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:52.789 [2024-11-26 20:55:47.568217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:52.789 [2024-11-26 20:55:47.568231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.789 [2024-11-26
20:55:47.568260] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:52.789 [2024-11-26 20:55:47.574221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.789 [2024-11-26 20:55:47.574273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:52.789 [2024-11-26 20:55:47.574287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.937 ms 00:27:52.789 [2024-11-26 20:55:47.574298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.789 [2024-11-26 20:55:47.574544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.789 [2024-11-26 20:55:47.574563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:52.789 [2024-11-26 20:55:47.574575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:27:52.789 [2024-11-26 20:55:47.574587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.789 [2024-11-26 20:55:47.578032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.789 [2024-11-26 20:55:47.578059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:52.789 [2024-11-26 20:55:47.578071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.428 ms 00:27:52.789 [2024-11-26 20:55:47.578088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.789 [2024-11-26 20:55:47.584448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.789 [2024-11-26 20:55:47.584482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:52.789 [2024-11-26 20:55:47.584495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.335 ms 00:27:52.789 [2024-11-26 20:55:47.584506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.789 [2024-11-26 20:55:47.624553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.789 [2024-11-26 20:55:47.624600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:52.789 [2024-11-26 20:55:47.624624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.963 ms 00:27:52.789 [2024-11-26 20:55:47.624635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.789 [2024-11-26 20:55:47.645183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.789 [2024-11-26 20:55:47.645219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:52.789 [2024-11-26 20:55:47.645233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.503 ms 00:27:52.789 [2024-11-26 20:55:47.645243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.789 [2024-11-26 20:55:47.645377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.789 [2024-11-26 20:55:47.645391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:52.789 [2024-11-26 20:55:47.645402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:27:52.789 [2024-11-26 20:55:47.645427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.789 [2024-11-26 20:55:47.681661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.789 [2024-11-26 20:55:47.681694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:52.789 [2024-11-26 20:55:47.681723] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.217 ms 00:27:52.789 [2024-11-26 20:55:47.681744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.789 [2024-11-26 20:55:47.717461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.789 [2024-11-26 20:55:47.717492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:52.789 [2024-11-26 20:55:47.717521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.681 ms 00:27:52.789 [2024-11-26 20:55:47.717531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.789 [2024-11-26 20:55:47.752607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.789 [2024-11-26 20:55:47.752646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:52.789 [2024-11-26 20:55:47.752659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.040 ms 00:27:52.789 [2024-11-26 20:55:47.752668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.048 [2024-11-26 20:55:47.788150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.048 [2024-11-26 20:55:47.788183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:53.048 [2024-11-26 20:55:47.788196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.407 ms 00:27:53.048 [2024-11-26 20:55:47.788206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.048 [2024-11-26 20:55:47.788243] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:53.048 [2024-11-26 20:55:47.788266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788409] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:53.048 [2024-11-26 20:55:47.788495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 
[2024-11-26 20:55:47.788690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 
state: free 00:27:53.049 [2024-11-26 20:55:47.788948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.788990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 
0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:53.049 [2024-11-26 20:55:47.789343] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:53.049 [2024-11-26 20:55:47.789353] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de9129c4-9c7d-4ebf-b0c4-26f94eaac199 00:27:53.049 [2024-11-26 20:55:47.789364] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:53.049 [2024-11-26 20:55:47.789373] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:53.049 [2024-11-26 20:55:47.789383] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:53.049 [2024-11-26 20:55:47.789394] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:53.049 [2024-11-26 20:55:47.789413] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:53.049 [2024-11-26 20:55:47.789424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:53.049 [2024-11-26 20:55:47.789433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:53.049 [2024-11-26 20:55:47.789442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:53.049 [2024-11-26 20:55:47.789452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:53.050 [2024-11-26 20:55:47.789461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.050 [2024-11-26 20:55:47.789471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:53.050 [2024-11-26 20:55:47.789481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:27:53.050 [2024-11-26 20:55:47.789494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.050 [2024-11-26 20:55:47.809557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:53.050 [2024-11-26 20:55:47.809587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:53.050 [2024-11-26 20:55:47.809600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.010 ms 00:27:53.050 [2024-11-26 20:55:47.809609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.050 [2024-11-26 20:55:47.810179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.050 [2024-11-26 20:55:47.810196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:53.050 [2024-11-26 20:55:47.810212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:27:53.050 [2024-11-26 20:55:47.810222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.050 [2024-11-26 20:55:47.860769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.050 [2024-11-26 20:55:47.860801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:53.050 [2024-11-26 20:55:47.860814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.050 [2024-11-26 20:55:47.860825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.050 [2024-11-26 20:55:47.860876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.050 [2024-11-26 20:55:47.860887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:53.050 [2024-11-26 20:55:47.860901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.050 [2024-11-26 20:55:47.860910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.050 [2024-11-26 20:55:47.860971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.050 [2024-11-26 20:55:47.860984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:53.050 [2024-11-26 20:55:47.860995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.050 [2024-11-26 20:55:47.861005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.050 [2024-11-26 20:55:47.861021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.050 [2024-11-26 20:55:47.861031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:53.050 [2024-11-26 20:55:47.861041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.050 [2024-11-26 20:55:47.861056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.050 [2024-11-26 20:55:47.984226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.050 [2024-11-26 20:55:47.984284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:53.050 [2024-11-26 20:55:47.984298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.050 [2024-11-26 20:55:47.984308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.308 [2024-11-26 20:55:48.085904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.308 [2024-11-26 20:55:48.085951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:53.308 [2024-11-26 20:55:48.085972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.308 [2024-11-26 20:55:48.085983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.308 
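One complete write-and-shutdown pass ends here: after the 1024 MiB of copy traffic above (about 35 s, 20:55:12.6 to 20:55:47.8, matching the reported 29 MBps average), every startup initializer reappears as a Rollback record in reverse order with duration 0.000 ms, the bands validity dump lists all 100 bands as free with wr_cnt 0, and WAF prints as inf, consistent with dividing total writes (960) by user writes (0). Two quick summaries can be pulled out of a saved log; a minimal sketch, again assuming a hypothetical ftl.log with one record per line:

  # 1) tally band states from the validity dump (here: 100 x "state: free")
  grep -oE 'state: [a-z]+' ftl.log | sort | uniq -c
  # 2) rank management steps by duration, slowest first
  #    (trace_step emits each step as a name record followed by a duration record)
  grep trace_step ftl.log | grep -oE '(name|duration): .*' |
      sed 's/^[a-z]*: //' | paste - - | sort -t$'\t' -k2 -rn | head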
[2024-11-26 20:55:48.086084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.308 [2024-11-26 20:55:48.086097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:53.308 [2024-11-26 20:55:48.086108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.308 [2024-11-26 20:55:48.086118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.308 [2024-11-26 20:55:48.086163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.308 [2024-11-26 20:55:48.086174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:53.308 [2024-11-26 20:55:48.086185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.308 [2024-11-26 20:55:48.086195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.308 [2024-11-26 20:55:48.086314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.308 [2024-11-26 20:55:48.086328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:53.308 [2024-11-26 20:55:48.086339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.308 [2024-11-26 20:55:48.086349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.308 [2024-11-26 20:55:48.086384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.308 [2024-11-26 20:55:48.086397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:53.308 [2024-11-26 20:55:48.086408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.308 [2024-11-26 20:55:48.086418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.308 [2024-11-26 20:55:48.086459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.308 [2024-11-26 20:55:48.086470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:53.308 [2024-11-26 20:55:48.086481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.308 [2024-11-26 20:55:48.086491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.308 [2024-11-26 20:55:48.086534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.308 [2024-11-26 20:55:48.086546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:53.308 [2024-11-26 20:55:48.086556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.308 [2024-11-26 20:55:48.086566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.308 [2024-11-26 20:55:48.086711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 518.545 ms, result 0 00:27:54.245 00:27:54.245 00:27:54.245 20:55:49 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:56.149 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:56.150 20:55:50 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:56.150 [2024-11-26 20:55:51.002413] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
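The ftl_restore steps at @76 and @79 above are the actual restore verification: md5sum -c confirms the testfile still matches the checksum recorded for it after the shutdown/startup cycle just logged, and spdk_dd then writes the testfile into the ftl0 bdev again at an output offset of 131072 blocks (--seek), so the startup that follows has a second, differently placed data set to recover. The same two harness commands, restated as issued in this run:

  # restore.sh@76: verify the testfile against its recorded md5
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
  # restore.sh@79: write the testfile to ftl0, skipping 131072 output blocks
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --seek=131072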
00:27:56.150 [2024-11-26 20:55:51.002549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80704 ] 00:27:56.409 [2024-11-26 20:55:51.182247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.409 [2024-11-26 20:55:51.335521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.977 [2024-11-26 20:55:51.698443] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:56.977 [2024-11-26 20:55:51.698502] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:56.977 [2024-11-26 20:55:51.859413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.977 [2024-11-26 20:55:51.859466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:56.977 [2024-11-26 20:55:51.859482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:56.977 [2024-11-26 20:55:51.859508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.977 [2024-11-26 20:55:51.859557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.977 [2024-11-26 20:55:51.859573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:56.977 [2024-11-26 20:55:51.859583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:56.977 [2024-11-26 20:55:51.859594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.977 [2024-11-26 20:55:51.859634] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:56.977 [2024-11-26 20:55:51.860585] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:56.977 [2024-11-26 20:55:51.860630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.977 [2024-11-26 20:55:51.860643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:56.977 [2024-11-26 20:55:51.860654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:27:56.977 [2024-11-26 20:55:51.860664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.977 [2024-11-26 20:55:51.862089] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:56.977 [2024-11-26 20:55:51.881531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.977 [2024-11-26 20:55:51.881571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:56.977 [2024-11-26 20:55:51.881586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.443 ms 00:27:56.977 [2024-11-26 20:55:51.881597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.977 [2024-11-26 20:55:51.881694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.977 [2024-11-26 20:55:51.881708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:56.977 [2024-11-26 20:55:51.881720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:56.977 [2024-11-26 20:55:51.881730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.977 [2024-11-26 20:55:51.888385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:56.977 [2024-11-26 20:55:51.888417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:56.977 [2024-11-26 20:55:51.888430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.582 ms 00:27:56.977 [2024-11-26 20:55:51.888444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.977 [2024-11-26 20:55:51.888525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.977 [2024-11-26 20:55:51.888538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:56.977 [2024-11-26 20:55:51.888549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:56.977 [2024-11-26 20:55:51.888559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.977 [2024-11-26 20:55:51.888604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.977 [2024-11-26 20:55:51.888626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:56.977 [2024-11-26 20:55:51.888637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:56.977 [2024-11-26 20:55:51.888647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.977 [2024-11-26 20:55:51.888677] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:56.977 [2024-11-26 20:55:51.893495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.977 [2024-11-26 20:55:51.893526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:56.977 [2024-11-26 20:55:51.893541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.824 ms 00:27:56.977 [2024-11-26 20:55:51.893572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.977 [2024-11-26 20:55:51.893603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.977 [2024-11-26 20:55:51.893614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:56.977 [2024-11-26 20:55:51.893625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:56.978 [2024-11-26 20:55:51.893644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.978 [2024-11-26 20:55:51.893699] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:56.978 [2024-11-26 20:55:51.893724] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:56.978 [2024-11-26 20:55:51.893762] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:56.978 [2024-11-26 20:55:51.893786] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:56.978 [2024-11-26 20:55:51.893879] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:56.978 [2024-11-26 20:55:51.893893] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:56.978 [2024-11-26 20:55:51.893906] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:56.978 [2024-11-26 20:55:51.893919] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:56.978 [2024-11-26 20:55:51.893931] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:56.978 [2024-11-26 20:55:51.893942] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:56.978 [2024-11-26 20:55:51.893953] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:56.978 [2024-11-26 20:55:51.893966] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:56.978 [2024-11-26 20:55:51.893976] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:56.978 [2024-11-26 20:55:51.893986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.978 [2024-11-26 20:55:51.893996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:56.978 [2024-11-26 20:55:51.894006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:27:56.978 [2024-11-26 20:55:51.894016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.978 [2024-11-26 20:55:51.894092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.978 [2024-11-26 20:55:51.894104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:56.978 [2024-11-26 20:55:51.894114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:56.978 [2024-11-26 20:55:51.894124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.978 [2024-11-26 20:55:51.894222] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:56.978 [2024-11-26 20:55:51.894236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:56.978 [2024-11-26 20:55:51.894247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:56.978 [2024-11-26 20:55:51.894257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:56.978 [2024-11-26 20:55:51.894277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:56.978 [2024-11-26 20:55:51.894295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:56.978 [2024-11-26 20:55:51.894305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:56.978 [2024-11-26 20:55:51.894324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:56.978 [2024-11-26 20:55:51.894333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:56.978 [2024-11-26 20:55:51.894344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:56.978 [2024-11-26 20:55:51.894364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:56.978 [2024-11-26 20:55:51.894374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:56.978 [2024-11-26 20:55:51.894383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:56.978 [2024-11-26 20:55:51.894402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:56.978 [2024-11-26 20:55:51.894412] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:56.978 [2024-11-26 20:55:51.894430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.978 [2024-11-26 20:55:51.894448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:56.978 [2024-11-26 20:55:51.894458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.978 [2024-11-26 20:55:51.894476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:56.978 [2024-11-26 20:55:51.894486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.978 [2024-11-26 20:55:51.894503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:56.978 [2024-11-26 20:55:51.894513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.978 [2024-11-26 20:55:51.894531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:56.978 [2024-11-26 20:55:51.894540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:56.978 [2024-11-26 20:55:51.894558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:56.978 [2024-11-26 20:55:51.894567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:56.978 [2024-11-26 20:55:51.894576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:56.978 [2024-11-26 20:55:51.894585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:56.978 [2024-11-26 20:55:51.894594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:56.978 [2024-11-26 20:55:51.894603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:56.978 [2024-11-26 20:55:51.894633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:56.978 [2024-11-26 20:55:51.894642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894651] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:56.978 [2024-11-26 20:55:51.894663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:56.978 [2024-11-26 20:55:51.894673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:56.978 [2024-11-26 20:55:51.894683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.978 [2024-11-26 20:55:51.894693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:56.978 [2024-11-26 20:55:51.894702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:56.978 [2024-11-26 20:55:51.894712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:56.978 
[2024-11-26 20:55:51.894722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:56.978 [2024-11-26 20:55:51.894731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:56.978 [2024-11-26 20:55:51.894740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:56.978 [2024-11-26 20:55:51.894751] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:56.978 [2024-11-26 20:55:51.894763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.978 [2024-11-26 20:55:51.894779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:56.978 [2024-11-26 20:55:51.894790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:56.978 [2024-11-26 20:55:51.894800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:56.978 [2024-11-26 20:55:51.894810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:56.978 [2024-11-26 20:55:51.894821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:56.978 [2024-11-26 20:55:51.894831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:56.978 [2024-11-26 20:55:51.894842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:56.978 [2024-11-26 20:55:51.894852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:56.978 [2024-11-26 20:55:51.894862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:56.978 [2024-11-26 20:55:51.894872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:56.978 [2024-11-26 20:55:51.894883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:56.978 [2024-11-26 20:55:51.894893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:56.978 [2024-11-26 20:55:51.894903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:56.978 [2024-11-26 20:55:51.894914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:56.978 [2024-11-26 20:55:51.894924] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:56.978 [2024-11-26 20:55:51.894936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.978 [2024-11-26 20:55:51.894946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:56.978 [2024-11-26 20:55:51.894957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:56.978 [2024-11-26 20:55:51.894967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:56.979 [2024-11-26 20:55:51.894977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:56.979 [2024-11-26 20:55:51.894988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.979 [2024-11-26 20:55:51.894999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:56.979 [2024-11-26 20:55:51.895009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:27:56.979 [2024-11-26 20:55:51.895018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.979 [2024-11-26 20:55:51.933666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.979 [2024-11-26 20:55:51.933707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:56.979 [2024-11-26 20:55:51.933721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.600 ms 00:27:56.979 [2024-11-26 20:55:51.933735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.979 [2024-11-26 20:55:51.933824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.979 [2024-11-26 20:55:51.933836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:56.979 [2024-11-26 20:55:51.933846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:56.979 [2024-11-26 20:55:51.933856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:51.991334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:51.991373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:57.238 [2024-11-26 20:55:51.991386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.405 ms 00:27:57.238 [2024-11-26 20:55:51.991412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:51.991459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:51.991470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:57.238 [2024-11-26 20:55:51.991485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:57.238 [2024-11-26 20:55:51.991495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:51.992010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:51.992033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:57.238 [2024-11-26 20:55:51.992045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:27:57.238 [2024-11-26 20:55:51.992055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:51.992176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:51.992189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:57.238 [2024-11-26 20:55:51.992206] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:27:57.238 [2024-11-26 20:55:51.992216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.011404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.011441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:57.238 [2024-11-26 20:55:52.011471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.166 ms 00:27:57.238 [2024-11-26 20:55:52.011482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.030805] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:57.238 [2024-11-26 20:55:52.030840] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:57.238 [2024-11-26 20:55:52.030871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.030882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:57.238 [2024-11-26 20:55:52.030893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.272 ms 00:27:57.238 [2024-11-26 20:55:52.030903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.059844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.059880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:57.238 [2024-11-26 20:55:52.059894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.898 ms 00:27:57.238 [2024-11-26 20:55:52.059905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.078402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.078435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:57.238 [2024-11-26 20:55:52.078463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.422 ms 00:27:57.238 [2024-11-26 20:55:52.078473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.096377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.096412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:57.238 [2024-11-26 20:55:52.096440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.861 ms 00:27:57.238 [2024-11-26 20:55:52.096450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.097251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.097283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:57.238 [2024-11-26 20:55:52.097302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:27:57.238 [2024-11-26 20:55:52.097312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.182289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.182353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:57.238 [2024-11-26 20:55:52.182392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.951 ms 00:27:57.238 [2024-11-26 20:55:52.182403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.193230] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:57.238 [2024-11-26 20:55:52.195999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.196028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:57.238 [2024-11-26 20:55:52.196058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.537 ms 00:27:57.238 [2024-11-26 20:55:52.196068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.196158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.196171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:57.238 [2024-11-26 20:55:52.196186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:57.238 [2024-11-26 20:55:52.196197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.196270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.196281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:57.238 [2024-11-26 20:55:52.196292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:57.238 [2024-11-26 20:55:52.196303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.196324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.196335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:57.238 [2024-11-26 20:55:52.196346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:57.238 [2024-11-26 20:55:52.196356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.238 [2024-11-26 20:55:52.196394] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:57.238 [2024-11-26 20:55:52.196406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.238 [2024-11-26 20:55:52.196417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:57.238 [2024-11-26 20:55:52.196428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:57.238 [2024-11-26 20:55:52.196439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.497 [2024-11-26 20:55:52.233609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.497 [2024-11-26 20:55:52.233655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:57.497 [2024-11-26 20:55:52.233676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.150 ms 00:27:57.497 [2024-11-26 20:55:52.233686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.497 [2024-11-26 20:55:52.233763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.497 [2024-11-26 20:55:52.233775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:57.497 [2024-11-26 20:55:52.233787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:57.497 [2024-11-26 20:55:52.233797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
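Each FTL management step in the startup sequence above is traced as a four-record group (Action, name, duration, status) emitted by mngt/ftl_mngt.c:trace_step, so per-step timings can be pulled straight out of a captured console log. A minimal sketch, assuming the console text has been saved to a file; the slowest_steps helper and its regular expressions are illustrative only, not part of the SPDK tree, and records split across line breaks will be missed, which can skew the pairing:

import re

def slowest_steps(log_text, n=5):
    # Step names run up to the next Jenkins HH:MM:SS.mmm wall-clock stamp;
    # the matching duration is reported in milliseconds on the record after.
    names = re.findall(
        r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+?) \d\d:\d\d:\d\d\.\d\d\d",
        log_text)
    durations = [float(ms) for ms in re.findall(
        r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms",
        log_text)]
    # name/duration records appear strictly in order, so zip pairs them up.
    return sorted(zip(names, durations), key=lambda p: p[1], reverse=True)[:n]

Run against the startup records above, this would surface 'Restore P2L checkpoints' (84.951 ms) and 'Initialize NV cache' (57.405 ms) as the dominant steps of this startup.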
00:27:57.497 [2024-11-26 20:55:52.235081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 375.164 ms, result 0
00:27:58.431  [2024-11-26T20:55:54.362Z] Copying: 28/1024 [MB] (28 MBps) [... 34 intermediate progress updates at a steady 28-29 MBps elided ...] [2024-11-26T20:56:28.820Z] Copying: 1023/1024 [MB] (16 MBps) [2024-11-26T20:56:28.820Z] Copying: 1024/1024 [MB] (average 28 MBps)
[2024-11-26 20:56:28.668277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:33.826 [2024-11-26 20:56:28.668522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:33.826 [2024-11-26 20:56:28.668561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:28:33.826 [2024-11-26 20:56:28.668574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:33.826 [2024-11-26 20:56:28.671753] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:33.826 [2024-11-26 20:56:28.677981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:33.826 [2024-11-26 20:56:28.678018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:33.826 [2024-11-26 20:56:28.678033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.182 ms
00:28:33.826 [2024-11-26 20:56:28.678044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:33.826 [2024-11-26 20:56:28.688294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:28:33.827 [2024-11-26 20:56:28.688334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:33.827 [2024-11-26 20:56:28.688348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.282 ms 00:28:33.827 [2024-11-26 20:56:28.688365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.827 [2024-11-26 20:56:28.710384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.827 [2024-11-26 20:56:28.710428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:33.827 [2024-11-26 20:56:28.710443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.000 ms 00:28:33.827 [2024-11-26 20:56:28.710456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.827 [2024-11-26 20:56:28.715593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.827 [2024-11-26 20:56:28.715722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:33.827 [2024-11-26 20:56:28.715736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.102 ms 00:28:33.827 [2024-11-26 20:56:28.715752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.827 [2024-11-26 20:56:28.752185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.827 [2024-11-26 20:56:28.752224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:33.827 [2024-11-26 20:56:28.752239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.345 ms 00:28:33.827 [2024-11-26 20:56:28.752265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.827 [2024-11-26 20:56:28.773280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.827 [2024-11-26 20:56:28.773314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:33.827 [2024-11-26 20:56:28.773344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.976 ms 00:28:33.827 [2024-11-26 20:56:28.773355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.104 [2024-11-26 20:56:28.882259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.104 [2024-11-26 20:56:28.882317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:34.104 [2024-11-26 20:56:28.882332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.862 ms 00:28:34.104 [2024-11-26 20:56:28.882343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.104 [2024-11-26 20:56:28.919024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.104 [2024-11-26 20:56:28.919060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:34.104 [2024-11-26 20:56:28.919089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.662 ms 00:28:34.104 [2024-11-26 20:56:28.919100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.104 [2024-11-26 20:56:28.954998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.104 [2024-11-26 20:56:28.955033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:34.104 [2024-11-26 20:56:28.955062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.860 ms 00:28:34.104 [2024-11-26 20:56:28.955072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.104 
[2024-11-26 20:56:28.990532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.104 [2024-11-26 20:56:28.990569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:34.104 [2024-11-26 20:56:28.990583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.423 ms 00:28:34.104 [2024-11-26 20:56:28.990593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.104 [2024-11-26 20:56:29.026196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.104 [2024-11-26 20:56:29.026241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:34.104 [2024-11-26 20:56:29.026270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.515 ms 00:28:34.104 [2024-11-26 20:56:29.026280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.104 [2024-11-26 20:56:29.026316] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:34.104 [2024-11-26 20:56:29.026332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116992 / 261120 wr_cnt: 1 state: open 00:28:34.104 [2024-11-26 20:56:29.026345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 
wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:34.104 [2024-11-26 20:56:29.026682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.026996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027059] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027319] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:34.105 [2024-11-26 20:56:29.027412] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:34.105 [2024-11-26 20:56:29.027422] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de9129c4-9c7d-4ebf-b0c4-26f94eaac199 00:28:34.105 [2024-11-26 20:56:29.027432] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116992 00:28:34.105 [2024-11-26 20:56:29.027442] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117952 00:28:34.105 [2024-11-26 20:56:29.027452] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116992 00:28:34.105 [2024-11-26 20:56:29.027463] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:28:34.105 [2024-11-26 20:56:29.027487] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:34.105 [2024-11-26 20:56:29.027497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:34.105 [2024-11-26 20:56:29.027507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:34.105 [2024-11-26 20:56:29.027516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:34.105 [2024-11-26 20:56:29.027525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:34.105 [2024-11-26 20:56:29.027535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.106 [2024-11-26 20:56:29.027545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:34.106 [2024-11-26 20:56:29.027556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:28:34.106 [2024-11-26 20:56:29.027565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.106 [2024-11-26 20:56:29.048047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.106 [2024-11-26 20:56:29.048079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:34.106 [2024-11-26 20:56:29.048098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.416 ms 00:28:34.106 [2024-11-26 20:56:29.048108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.106 [2024-11-26 20:56:29.048693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.106 [2024-11-26 20:56:29.048711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:34.106 [2024-11-26 20:56:29.048722] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:28:34.106 [2024-11-26 20:56:29.048732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.100277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.100312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:34.365 [2024-11-26 20:56:29.100325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.100336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.100396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.100408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:34.365 [2024-11-26 20:56:29.100419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.100429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.100491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.100508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:34.365 [2024-11-26 20:56:29.100519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.100529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.100545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.100556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:34.365 [2024-11-26 20:56:29.100566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.100576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.226486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.226551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:34.365 [2024-11-26 20:56:29.226565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.226591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.329347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.329399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:34.365 [2024-11-26 20:56:29.329414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.329441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.329536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.329548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:34.365 [2024-11-26 20:56:29.329559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.329575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.329642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.329655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:28:34.365 [2024-11-26 20:56:29.329665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.329676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.329781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.329794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:34.365 [2024-11-26 20:56:29.329805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.329820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.329855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.329867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:34.365 [2024-11-26 20:56:29.329877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.329887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.329926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.329937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:34.365 [2024-11-26 20:56:29.329947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.329957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.330006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:34.365 [2024-11-26 20:56:29.330018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:34.365 [2024-11-26 20:56:29.330028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:34.365 [2024-11-26 20:56:29.330038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.365 [2024-11-26 20:56:29.330160] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 663.078 ms, result 0 00:28:36.268 00:28:36.268 00:28:36.268 20:56:31 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:36.268 [2024-11-26 20:56:31.151090] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
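Two figures reported earlier can be cross-checked by hand before the restore run proceeds: the statistics dump put total writes at 117952 blocks against 116992 user writes, which is exactly the logged WAF of 1.0082, and the copy loop moved 1024 MB between its first and last chunk stamps in roughly 35 s, consistent with the logged 28 MBps average. A quick illustrative check (constants transcribed from the records above, nothing here is recomputed by SPDK itself):

from datetime import datetime

# Write amplification factor: total physical writes / user writes,
# figures taken from the ftl_dev_dump_stats records above.
print(f"WAF = {117952 / 116992:.4f}")   # -> WAF = 1.0082, as logged

# Copy throughput: first and last "Copying:" chunk timestamps above.
t0 = datetime.fromisoformat("2024-11-26T20:55:54.362")
t1 = datetime.fromisoformat("2024-11-26T20:56:28.820")
print(f"{1024 / (t1 - t0).total_seconds():.1f} MBps")
# -> ~29.7 MBps; this lands near the logged 28 MBps average once the
#    first chunk's own copy time (before its stamp) is counted as well.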
00:28:36.268 [2024-11-26 20:56:31.151265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81101 ] 00:28:36.527 [2024-11-26 20:56:31.334654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.527 [2024-11-26 20:56:31.442000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.095 [2024-11-26 20:56:31.814116] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:37.095 [2024-11-26 20:56:31.814190] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:37.095 [2024-11-26 20:56:31.975156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.095 [2024-11-26 20:56:31.975203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:37.095 [2024-11-26 20:56:31.975233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:37.095 [2024-11-26 20:56:31.975243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.095 [2024-11-26 20:56:31.975290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.095 [2024-11-26 20:56:31.975305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:37.095 [2024-11-26 20:56:31.975316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:37.095 [2024-11-26 20:56:31.975326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.095 [2024-11-26 20:56:31.975347] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:37.095 [2024-11-26 20:56:31.976329] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:37.095 [2024-11-26 20:56:31.976355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.095 [2024-11-26 20:56:31.976366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:37.095 [2024-11-26 20:56:31.976378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:28:37.095 [2024-11-26 20:56:31.976388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.095 [2024-11-26 20:56:31.977805] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:37.095 [2024-11-26 20:56:31.996908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.095 [2024-11-26 20:56:31.996940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:37.095 [2024-11-26 20:56:31.996970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.104 ms 00:28:37.095 [2024-11-26 20:56:31.996980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.095 [2024-11-26 20:56:31.997044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.095 [2024-11-26 20:56:31.997057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:37.095 [2024-11-26 20:56:31.997068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:37.095 [2024-11-26 20:56:31.997078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.095 [2024-11-26 20:56:32.003735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:37.095 [2024-11-26 20:56:32.003757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:37.095 [2024-11-26 20:56:32.003768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.585 ms 00:28:37.095 [2024-11-26 20:56:32.003798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.095 [2024-11-26 20:56:32.003874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.095 [2024-11-26 20:56:32.003887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:37.095 [2024-11-26 20:56:32.003898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:37.095 [2024-11-26 20:56:32.003908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.095 [2024-11-26 20:56:32.003949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.095 [2024-11-26 20:56:32.003961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:37.095 [2024-11-26 20:56:32.003971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:37.095 [2024-11-26 20:56:32.003981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.095 [2024-11-26 20:56:32.004009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:37.095 [2024-11-26 20:56:32.008833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.095 [2024-11-26 20:56:32.008860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:37.095 [2024-11-26 20:56:32.008875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:28:37.095 [2024-11-26 20:56:32.008886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.095 [2024-11-26 20:56:32.008915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.095 [2024-11-26 20:56:32.008926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:37.095 [2024-11-26 20:56:32.008936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:37.095 [2024-11-26 20:56:32.008946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.095 [2024-11-26 20:56:32.008999] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:37.095 [2024-11-26 20:56:32.009022] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:37.096 [2024-11-26 20:56:32.009058] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:37.096 [2024-11-26 20:56:32.009079] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:37.096 [2024-11-26 20:56:32.009169] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:37.096 [2024-11-26 20:56:32.009182] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:37.096 [2024-11-26 20:56:32.009195] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:37.096 [2024-11-26 20:56:32.009208] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:37.096 [2024-11-26 20:56:32.009220] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:37.096 [2024-11-26 20:56:32.009231] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:37.096 [2024-11-26 20:56:32.009240] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:37.096 [2024-11-26 20:56:32.009253] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:37.096 [2024-11-26 20:56:32.009263] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:37.096 [2024-11-26 20:56:32.009273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.096 [2024-11-26 20:56:32.009283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:37.096 [2024-11-26 20:56:32.009293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:28:37.096 [2024-11-26 20:56:32.009302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.096 [2024-11-26 20:56:32.009373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.096 [2024-11-26 20:56:32.009384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:37.096 [2024-11-26 20:56:32.009394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:37.096 [2024-11-26 20:56:32.009403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.096 [2024-11-26 20:56:32.009499] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:37.096 [2024-11-26 20:56:32.009514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:37.096 [2024-11-26 20:56:32.009525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:37.096 [2024-11-26 20:56:32.009535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:37.096 [2024-11-26 20:56:32.009555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:37.096 [2024-11-26 20:56:32.009575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:37.096 [2024-11-26 20:56:32.009584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:37.096 [2024-11-26 20:56:32.009603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:37.096 [2024-11-26 20:56:32.009627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:37.096 [2024-11-26 20:56:32.009637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:37.096 [2024-11-26 20:56:32.009656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:37.096 [2024-11-26 20:56:32.009666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:37.096 [2024-11-26 20:56:32.009675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:37.096 [2024-11-26 20:56:32.009694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:37.096 [2024-11-26 20:56:32.009703] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:37.096 [2024-11-26 20:56:32.009723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:37.096 [2024-11-26 20:56:32.009742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:37.096 [2024-11-26 20:56:32.009751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:37.096 [2024-11-26 20:56:32.009768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:37.096 [2024-11-26 20:56:32.009778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:37.096 [2024-11-26 20:56:32.009795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:37.096 [2024-11-26 20:56:32.009804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:37.096 [2024-11-26 20:56:32.009822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:37.096 [2024-11-26 20:56:32.009831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:37.096 [2024-11-26 20:56:32.009849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:37.096 [2024-11-26 20:56:32.009858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:37.096 [2024-11-26 20:56:32.009867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:37.096 [2024-11-26 20:56:32.009876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:37.096 [2024-11-26 20:56:32.009884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:37.096 [2024-11-26 20:56:32.009893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:37.096 [2024-11-26 20:56:32.009911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:37.096 [2024-11-26 20:56:32.009920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009930] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:37.096 [2024-11-26 20:56:32.009940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:37.096 [2024-11-26 20:56:32.009950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:37.096 [2024-11-26 20:56:32.009960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:37.096 [2024-11-26 20:56:32.009969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:37.096 [2024-11-26 20:56:32.009978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:37.096 [2024-11-26 20:56:32.009987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:37.096 
[2024-11-26 20:56:32.009997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:37.096 [2024-11-26 20:56:32.010006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:37.096 [2024-11-26 20:56:32.010016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:37.096 [2024-11-26 20:56:32.010027] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:37.096 [2024-11-26 20:56:32.010039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:37.096 [2024-11-26 20:56:32.010053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:37.096 [2024-11-26 20:56:32.010064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:37.096 [2024-11-26 20:56:32.010074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:37.096 [2024-11-26 20:56:32.010085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:37.096 [2024-11-26 20:56:32.010095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:37.096 [2024-11-26 20:56:32.010105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:37.096 [2024-11-26 20:56:32.010115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:37.096 [2024-11-26 20:56:32.010125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:37.096 [2024-11-26 20:56:32.010135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:37.096 [2024-11-26 20:56:32.010146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:37.096 [2024-11-26 20:56:32.010156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:37.096 [2024-11-26 20:56:32.010166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:37.096 [2024-11-26 20:56:32.010177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:37.096 [2024-11-26 20:56:32.010187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:37.096 [2024-11-26 20:56:32.010197] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:37.096 [2024-11-26 20:56:32.010208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:37.096 [2024-11-26 20:56:32.010220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:37.096 [2024-11-26 20:56:32.010230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:37.096 [2024-11-26 20:56:32.010240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:37.096 [2024-11-26 20:56:32.010250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:37.096 [2024-11-26 20:56:32.010260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.096 [2024-11-26 20:56:32.010271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:37.096 [2024-11-26 20:56:32.010282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:28:37.096 [2024-11-26 20:56:32.010291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.096 [2024-11-26 20:56:32.049546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.096 [2024-11-26 20:56:32.049581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:37.096 [2024-11-26 20:56:32.049596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.207 ms 00:28:37.096 [2024-11-26 20:56:32.049622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.096 [2024-11-26 20:56:32.049708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.096 [2024-11-26 20:56:32.049719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:37.096 [2024-11-26 20:56:32.049730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:37.096 [2024-11-26 20:56:32.049740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.355 [2024-11-26 20:56:32.108993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.355 [2024-11-26 20:56:32.109027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:37.355 [2024-11-26 20:56:32.109055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.180 ms 00:28:37.355 [2024-11-26 20:56:32.109066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.355 [2024-11-26 20:56:32.109110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.355 [2024-11-26 20:56:32.109122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:37.355 [2024-11-26 20:56:32.109136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:37.355 [2024-11-26 20:56:32.109146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.355 [2024-11-26 20:56:32.109624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.355 [2024-11-26 20:56:32.109651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:37.355 [2024-11-26 20:56:32.109662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:28:37.355 [2024-11-26 20:56:32.109672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.355 [2024-11-26 20:56:32.109787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.355 [2024-11-26 20:56:32.109800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:37.355 [2024-11-26 20:56:32.109817] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:28:37.355 [2024-11-26 20:56:32.109827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.355 [2024-11-26 20:56:32.128664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.355 [2024-11-26 20:56:32.128700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:37.355 [2024-11-26 20:56:32.128713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.816 ms 00:28:37.355 [2024-11-26 20:56:32.128724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.355 [2024-11-26 20:56:32.148020] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:37.355 [2024-11-26 20:56:32.148067] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:37.355 [2024-11-26 20:56:32.148097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.355 [2024-11-26 20:56:32.148108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:37.355 [2024-11-26 20:56:32.148120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.260 ms 00:28:37.355 [2024-11-26 20:56:32.148130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.355 [2024-11-26 20:56:32.177077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.355 [2024-11-26 20:56:32.177108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:37.356 [2024-11-26 20:56:32.177138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.905 ms 00:28:37.356 [2024-11-26 20:56:32.177148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.356 [2024-11-26 20:56:32.195187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.356 [2024-11-26 20:56:32.195220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:37.356 [2024-11-26 20:56:32.195233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.996 ms 00:28:37.356 [2024-11-26 20:56:32.195243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.356 [2024-11-26 20:56:32.212886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.356 [2024-11-26 20:56:32.212916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:37.356 [2024-11-26 20:56:32.212944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.606 ms 00:28:37.356 [2024-11-26 20:56:32.212953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.356 [2024-11-26 20:56:32.213743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.356 [2024-11-26 20:56:32.213766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:37.356 [2024-11-26 20:56:32.213782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:28:37.356 [2024-11-26 20:56:32.213792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.356 [2024-11-26 20:56:32.299156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.356 [2024-11-26 20:56:32.299214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:37.356 [2024-11-26 20:56:32.299237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.331 ms 00:28:37.356 [2024-11-26 20:56:32.299248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.356 [2024-11-26 20:56:32.310034] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:37.356 [2024-11-26 20:56:32.312885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.356 [2024-11-26 20:56:32.312914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:37.356 [2024-11-26 20:56:32.312928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.580 ms 00:28:37.356 [2024-11-26 20:56:32.312937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.356 [2024-11-26 20:56:32.313030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.356 [2024-11-26 20:56:32.313043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:37.356 [2024-11-26 20:56:32.313059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:37.356 [2024-11-26 20:56:32.313068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.356 [2024-11-26 20:56:32.314667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.356 [2024-11-26 20:56:32.314699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:37.356 [2024-11-26 20:56:32.314712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:28:37.356 [2024-11-26 20:56:32.314723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.356 [2024-11-26 20:56:32.314759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.356 [2024-11-26 20:56:32.314771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:37.356 [2024-11-26 20:56:32.314782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:37.356 [2024-11-26 20:56:32.314792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.356 [2024-11-26 20:56:32.314834] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:37.356 [2024-11-26 20:56:32.314847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.356 [2024-11-26 20:56:32.314857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:37.356 [2024-11-26 20:56:32.314868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:37.356 [2024-11-26 20:56:32.314878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.615 [2024-11-26 20:56:32.350818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.615 [2024-11-26 20:56:32.350855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:37.615 [2024-11-26 20:56:32.350890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.919 ms 00:28:37.615 [2024-11-26 20:56:32.350901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.615 [2024-11-26 20:56:32.350978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.615 [2024-11-26 20:56:32.350991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:37.615 [2024-11-26 20:56:32.351002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:37.615 [2024-11-26 20:56:32.351014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
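The layout figures reported by ftl_layout_setup earlier in this trace hang together: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB shown for the l2p region, and each 8.00 MiB p2l region holds the 2048 P2L checkpoint pages at the device's 4 KiB block size. A minimal shell check of the L2P sizing, using only values taken from the log above:

entries=20971520    # "L2P entries" from ftl_layout_setup
addr_size=4         # "L2P address size", bytes per entry
echo $(( entries * addr_size / 1024 / 1024 ))    # prints 80 -> the 80.00 MiB l2p region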
00:28:37.615 [2024-11-26 20:56:32.352208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.460 ms, result 0 00:28:38.988  [2024-11-26T20:56:34.917Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-26T20:56:35.851Z] Copying: 56/1024 [MB] (29 MBps) [2024-11-26T20:56:36.786Z] Copying: 85/1024 [MB] (29 MBps) [2024-11-26T20:56:37.721Z] Copying: 115/1024 [MB] (29 MBps) [2024-11-26T20:56:38.655Z] Copying: 145/1024 [MB] (29 MBps) [2024-11-26T20:56:39.588Z] Copying: 175/1024 [MB] (30 MBps) [2024-11-26T20:56:40.963Z] Copying: 205/1024 [MB] (29 MBps) [2024-11-26T20:56:41.901Z] Copying: 235/1024 [MB] (30 MBps) [2024-11-26T20:56:42.837Z] Copying: 266/1024 [MB] (30 MBps) [2024-11-26T20:56:43.773Z] Copying: 296/1024 [MB] (30 MBps) [2024-11-26T20:56:44.708Z] Copying: 326/1024 [MB] (30 MBps) [2024-11-26T20:56:45.645Z] Copying: 357/1024 [MB] (30 MBps) [2024-11-26T20:56:46.582Z] Copying: 388/1024 [MB] (30 MBps) [2024-11-26T20:56:47.959Z] Copying: 419/1024 [MB] (30 MBps) [2024-11-26T20:56:48.897Z] Copying: 449/1024 [MB] (30 MBps) [2024-11-26T20:56:49.833Z] Copying: 480/1024 [MB] (30 MBps) [2024-11-26T20:56:50.768Z] Copying: 511/1024 [MB] (30 MBps) [2024-11-26T20:56:51.703Z] Copying: 542/1024 [MB] (30 MBps) [2024-11-26T20:56:52.641Z] Copying: 571/1024 [MB] (29 MBps) [2024-11-26T20:56:53.577Z] Copying: 602/1024 [MB] (30 MBps) [2024-11-26T20:56:54.950Z] Copying: 632/1024 [MB] (30 MBps) [2024-11-26T20:56:55.885Z] Copying: 662/1024 [MB] (30 MBps) [2024-11-26T20:56:56.877Z] Copying: 693/1024 [MB] (30 MBps) [2024-11-26T20:56:57.810Z] Copying: 724/1024 [MB] (30 MBps) [2024-11-26T20:56:58.743Z] Copying: 755/1024 [MB] (30 MBps) [2024-11-26T20:56:59.678Z] Copying: 785/1024 [MB] (30 MBps) [2024-11-26T20:57:00.613Z] Copying: 816/1024 [MB] (31 MBps) [2024-11-26T20:57:01.995Z] Copying: 847/1024 [MB] (30 MBps) [2024-11-26T20:57:02.934Z] Copying: 877/1024 [MB] (30 MBps) [2024-11-26T20:57:03.870Z] Copying: 907/1024 [MB] (29 MBps) [2024-11-26T20:57:04.807Z] Copying: 937/1024 [MB] (30 MBps) [2024-11-26T20:57:05.744Z] Copying: 967/1024 [MB] (29 MBps) [2024-11-26T20:57:06.680Z] Copying: 997/1024 [MB] (29 MBps) [2024-11-26T20:57:06.938Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-26 20:57:06.871068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.944 [2024-11-26 20:57:06.871393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:11.944 [2024-11-26 20:57:06.871432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:11.944 [2024-11-26 20:57:06.871445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.944 [2024-11-26 20:57:06.871490] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:11.944 [2024-11-26 20:57:06.876651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.944 [2024-11-26 20:57:06.876690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:11.944 [2024-11-26 20:57:06.876704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.141 ms 00:29:11.944 [2024-11-26 20:57:06.876716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.944 [2024-11-26 20:57:06.876929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.944 [2024-11-26 20:57:06.876942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:11.944 [2024-11-26 20:57:06.876953] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:29:11.944 [2024-11-26 20:57:06.876968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.944 [2024-11-26 20:57:06.881412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.944 [2024-11-26 20:57:06.881477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:11.944 [2024-11-26 20:57:06.881494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.426 ms 00:29:11.944 [2024-11-26 20:57:06.881506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.944 [2024-11-26 20:57:06.887570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.944 [2024-11-26 20:57:06.887625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:11.944 [2024-11-26 20:57:06.887639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.021 ms 00:29:11.944 [2024-11-26 20:57:06.887657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.944 [2024-11-26 20:57:06.927022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.944 [2024-11-26 20:57:06.927067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:11.944 [2024-11-26 20:57:06.927082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.317 ms 00:29:11.944 [2024-11-26 20:57:06.927092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.203 [2024-11-26 20:57:06.947709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.203 [2024-11-26 20:57:06.947752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:12.203 [2024-11-26 20:57:06.947769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.572 ms 00:29:12.203 [2024-11-26 20:57:06.947779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.203 [2024-11-26 20:57:07.060438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.203 [2024-11-26 20:57:07.060495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:12.203 [2024-11-26 20:57:07.060514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.606 ms 00:29:12.203 [2024-11-26 20:57:07.060527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.203 [2024-11-26 20:57:07.098642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.203 [2024-11-26 20:57:07.098685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:12.203 [2024-11-26 20:57:07.098700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.094 ms 00:29:12.203 [2024-11-26 20:57:07.098727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.203 [2024-11-26 20:57:07.135225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.203 [2024-11-26 20:57:07.135265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:12.203 [2024-11-26 20:57:07.135279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.445 ms 00:29:12.203 [2024-11-26 20:57:07.135306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.203 [2024-11-26 20:57:07.171174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.203 [2024-11-26 20:57:07.171213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist superblock 00:29:12.203 [2024-11-26 20:57:07.171227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.829 ms 00:29:12.203 [2024-11-26 20:57:07.171237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.463 [2024-11-26 20:57:07.206443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.463 [2024-11-26 20:57:07.206481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:12.463 [2024-11-26 20:57:07.206494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.122 ms 00:29:12.463 [2024-11-26 20:57:07.206519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.463 [2024-11-26 20:57:07.206556] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:12.463 [2024-11-26 20:57:07.206573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:29:12.463 [2024-11-26 20:57:07.206586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 
20:57:07.206805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.206996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:29:12.464 [2024-11-26 20:57:07.207068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:12.464 [2024-11-26 20:57:07.207341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:12.465 [2024-11-26 20:57:07.207678] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:12.465 [2024-11-26 20:57:07.207688] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de9129c4-9c7d-4ebf-b0c4-26f94eaac199 00:29:12.465 [2024-11-26 20:57:07.207699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:29:12.465 [2024-11-26 20:57:07.207709] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15040 00:29:12.465 [2024-11-26 20:57:07.207719] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14080 00:29:12.465 [2024-11-26 20:57:07.207730] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0682 00:29:12.465 [2024-11-26 20:57:07.207745] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:12.465 [2024-11-26 20:57:07.207766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:12.465 [2024-11-26 20:57:07.207776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:12.465 [2024-11-26 20:57:07.207785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:12.465 [2024-11-26 20:57:07.207794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:12.465 [2024-11-26 20:57:07.207810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.465 [2024-11-26 20:57:07.207821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:12.465 [2024-11-26 20:57:07.207831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:29:12.465 [2024-11-26 20:57:07.207841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.465 [2024-11-26 20:57:07.228142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.465 [2024-11-26 20:57:07.228176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:12.465 [2024-11-26 20:57:07.228211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.264 ms 00:29:12.465 [2024-11-26 20:57:07.228222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.465 [2024-11-26 20:57:07.228774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.465 [2024-11-26 20:57:07.228839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:12.465 [2024-11-26 20:57:07.228853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:29:12.465 [2024-11-26 20:57:07.228863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.465 
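The WAF in the statistics dump above is simply total device writes divided by user writes, 15040 / 14080; the extra ~6.8% is FTL-internal traffic (metadata and relocation) written on top of the user data. Reproducing the logged value from the logged counters:

awk 'BEGIN { printf "WAF: %.4f\n", 15040 / 14080 }'    # -> WAF: 1.0682

The "total valid LBAs: 131072" line is likewise consistent with the band dump: Band 1 is the only open band and carries exactly 131072 valid blocks.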
[2024-11-26 20:57:07.280606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.465 [2024-11-26 20:57:07.280657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:12.465 [2024-11-26 20:57:07.280672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.465 [2024-11-26 20:57:07.280683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.465 [2024-11-26 20:57:07.280736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.465 [2024-11-26 20:57:07.280747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:12.465 [2024-11-26 20:57:07.280758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.465 [2024-11-26 20:57:07.280768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.465 [2024-11-26 20:57:07.280847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.465 [2024-11-26 20:57:07.280861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:12.465 [2024-11-26 20:57:07.280876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.465 [2024-11-26 20:57:07.280886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.465 [2024-11-26 20:57:07.280904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.465 [2024-11-26 20:57:07.280914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:12.465 [2024-11-26 20:57:07.280925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.465 [2024-11-26 20:57:07.280935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.465 [2024-11-26 20:57:07.404944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.465 [2024-11-26 20:57:07.405027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:12.465 [2024-11-26 20:57:07.405043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.465 [2024-11-26 20:57:07.405069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.725 [2024-11-26 20:57:07.505547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.725 [2024-11-26 20:57:07.505605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:12.725 [2024-11-26 20:57:07.505651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.725 [2024-11-26 20:57:07.505662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.725 [2024-11-26 20:57:07.505780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.725 [2024-11-26 20:57:07.505794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:12.725 [2024-11-26 20:57:07.505805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.725 [2024-11-26 20:57:07.505818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.725 [2024-11-26 20:57:07.505863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.725 [2024-11-26 20:57:07.505875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:12.725 [2024-11-26 20:57:07.505885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.725 [2024-11-26 20:57:07.505895] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.725 [2024-11-26 20:57:07.506016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.725 [2024-11-26 20:57:07.506029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:12.725 [2024-11-26 20:57:07.506040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.725 [2024-11-26 20:57:07.506050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.725 [2024-11-26 20:57:07.506091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.725 [2024-11-26 20:57:07.506104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:12.725 [2024-11-26 20:57:07.506114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.725 [2024-11-26 20:57:07.506124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.725 [2024-11-26 20:57:07.506165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.725 [2024-11-26 20:57:07.506176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:12.725 [2024-11-26 20:57:07.506186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.725 [2024-11-26 20:57:07.506196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.725 [2024-11-26 20:57:07.506243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.725 [2024-11-26 20:57:07.506255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:12.725 [2024-11-26 20:57:07.506265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.725 [2024-11-26 20:57:07.506275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.725 [2024-11-26 20:57:07.506395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 635.291 ms, result 0 00:29:13.663 00:29:13.663 00:29:13.663 20:57:08 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:15.577 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:15.577 20:57:10 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:15.577 20:57:10 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:29:15.577 20:57:10 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:15.577 20:57:10 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:15.577 20:57:10 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:15.577 20:57:10 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79673 00:29:15.577 20:57:10 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79673 ']' 00:29:15.577 20:57:10 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79673 00:29:15.577 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79673) - No such process 00:29:15.577 20:57:10 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79673 is not found' 00:29:15.577 Process with pid 79673 is not found 00:29:15.577 20:57:10 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:29:15.577 Remove shared memory files 00:29:15.577 20:57:10 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 
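The "testfile: OK" from md5sum -c above is the restore test's actual pass criterion: the checksum recorded when the test data was written must still match after the FTL device has been shut down and brought back up. A sketch of the same round-trip, with illustrative paths rather than the test's real helpers:

md5sum testfile > testfile.md5    # record the checksum after writing the test data
# ... shut the FTL device down, restore it, re-read the data ...
md5sum -c testfile.md5            # prints 'testfile: OK' when the restored data matches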
00:29:15.577 20:57:10 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:29:15.577 20:57:10 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:29:15.851 20:57:10 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:29:15.851 20:57:10 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:15.851 20:57:10 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:29:15.851 00:29:15.851 real 2m59.537s 00:29:15.851 user 2m47.088s 00:29:15.851 sys 0m14.288s 00:29:15.851 20:57:10 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.851 20:57:10 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:29:15.851 ************************************ 00:29:15.851 END TEST ftl_restore 00:29:15.851 ************************************ 00:29:15.851 20:57:10 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:15.851 20:57:10 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:15.851 20:57:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.851 20:57:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:15.851 ************************************ 00:29:15.851 START TEST ftl_dirty_shutdown 00:29:15.851 ************************************ 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:15.851 * Looking for test storage... 00:29:15.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:15.851 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:16.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.111 --rc genhtml_branch_coverage=1 00:29:16.111 --rc genhtml_function_coverage=1 00:29:16.111 --rc genhtml_legend=1 00:29:16.111 --rc geninfo_all_blocks=1 00:29:16.111 --rc geninfo_unexecuted_blocks=1 00:29:16.111 00:29:16.111 ' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:16.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.111 --rc genhtml_branch_coverage=1 00:29:16.111 --rc genhtml_function_coverage=1 00:29:16.111 --rc genhtml_legend=1 00:29:16.111 --rc geninfo_all_blocks=1 00:29:16.111 --rc geninfo_unexecuted_blocks=1 00:29:16.111 00:29:16.111 ' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:16.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.111 --rc genhtml_branch_coverage=1 00:29:16.111 --rc genhtml_function_coverage=1 00:29:16.111 --rc genhtml_legend=1 00:29:16.111 --rc geninfo_all_blocks=1 00:29:16.111 --rc geninfo_unexecuted_blocks=1 00:29:16.111 00:29:16.111 ' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:16.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.111 --rc genhtml_branch_coverage=1 00:29:16.111 --rc genhtml_function_coverage=1 00:29:16.111 --rc genhtml_legend=1 00:29:16.111 --rc geninfo_all_blocks=1 00:29:16.111 --rc geninfo_unexecuted_blocks=1 00:29:16.111 00:29:16.111 ' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:29:16.111 20:57:10 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81562 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81562 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81562 ']' 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.111 20:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:16.111 [2024-11-26 20:57:10.979861] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
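waitforlisten above simply blocks until the freshly launched spdk_tgt answers on its RPC socket. A minimal equivalent of that launch-and-wait pattern (assuming the default /var/tmp/spdk.sock socket; the real helper does more bookkeeping, such as registering the PID for cleanup on exit):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
svcpid=$!
# poll the RPC socket until the target is up; rpc_get_methods is a standard SPDK RPC
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done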
00:29:16.112 [2024-11-26 20:57:10.980195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81562 ] 00:29:16.370 [2024-11-26 20:57:11.165815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.370 [2024-11-26 20:57:11.345091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.307 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.308 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:17.308 20:57:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:17.308 20:57:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:29:17.308 20:57:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:17.308 20:57:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:29:17.308 20:57:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:17.308 20:57:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:17.876 20:57:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:17.876 20:57:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:17.876 20:57:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:17.876 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:17.876 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:17.876 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:17.876 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:17.876 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:18.135 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:18.135 { 00:29:18.135 "name": "nvme0n1", 00:29:18.135 "aliases": [ 00:29:18.135 "e625366a-0590-4960-8d8d-27fea819d302" 00:29:18.135 ], 00:29:18.135 "product_name": "NVMe disk", 00:29:18.135 "block_size": 4096, 00:29:18.135 "num_blocks": 1310720, 00:29:18.135 "uuid": "e625366a-0590-4960-8d8d-27fea819d302", 00:29:18.135 "numa_id": -1, 00:29:18.135 "assigned_rate_limits": { 00:29:18.135 "rw_ios_per_sec": 0, 00:29:18.135 "rw_mbytes_per_sec": 0, 00:29:18.136 "r_mbytes_per_sec": 0, 00:29:18.136 "w_mbytes_per_sec": 0 00:29:18.136 }, 00:29:18.136 "claimed": true, 00:29:18.136 "claim_type": "read_many_write_one", 00:29:18.136 "zoned": false, 00:29:18.136 "supported_io_types": { 00:29:18.136 "read": true, 00:29:18.136 "write": true, 00:29:18.136 "unmap": true, 00:29:18.136 "flush": true, 00:29:18.136 "reset": true, 00:29:18.136 "nvme_admin": true, 00:29:18.136 "nvme_io": true, 00:29:18.136 "nvme_io_md": false, 00:29:18.136 "write_zeroes": true, 00:29:18.136 "zcopy": false, 00:29:18.136 "get_zone_info": false, 00:29:18.136 "zone_management": false, 00:29:18.136 "zone_append": false, 00:29:18.136 "compare": true, 00:29:18.136 "compare_and_write": false, 00:29:18.136 "abort": true, 00:29:18.136 "seek_hole": false, 00:29:18.136 "seek_data": false, 00:29:18.136 
"copy": true, 00:29:18.136 "nvme_iov_md": false 00:29:18.136 }, 00:29:18.136 "driver_specific": { 00:29:18.136 "nvme": [ 00:29:18.136 { 00:29:18.136 "pci_address": "0000:00:11.0", 00:29:18.136 "trid": { 00:29:18.136 "trtype": "PCIe", 00:29:18.136 "traddr": "0000:00:11.0" 00:29:18.136 }, 00:29:18.136 "ctrlr_data": { 00:29:18.136 "cntlid": 0, 00:29:18.136 "vendor_id": "0x1b36", 00:29:18.136 "model_number": "QEMU NVMe Ctrl", 00:29:18.136 "serial_number": "12341", 00:29:18.136 "firmware_revision": "8.0.0", 00:29:18.136 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:18.136 "oacs": { 00:29:18.136 "security": 0, 00:29:18.136 "format": 1, 00:29:18.136 "firmware": 0, 00:29:18.136 "ns_manage": 1 00:29:18.136 }, 00:29:18.136 "multi_ctrlr": false, 00:29:18.136 "ana_reporting": false 00:29:18.136 }, 00:29:18.136 "vs": { 00:29:18.136 "nvme_version": "1.4" 00:29:18.136 }, 00:29:18.136 "ns_data": { 00:29:18.136 "id": 1, 00:29:18.136 "can_share": false 00:29:18.136 } 00:29:18.136 } 00:29:18.136 ], 00:29:18.136 "mp_policy": "active_passive" 00:29:18.136 } 00:29:18.136 } 00:29:18.136 ]' 00:29:18.136 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:18.136 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:18.136 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:18.136 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:18.136 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:18.136 20:57:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:18.136 20:57:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:18.136 20:57:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:18.136 20:57:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:18.136 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:18.136 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:18.395 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=703ee9f7-b9ba-4d26-93c5-5450400f5d1f 00:29:18.395 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:18.395 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 703ee9f7-b9ba-4d26-93c5-5450400f5d1f 00:29:18.654 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:18.915 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=77c45368-d26f-4b14-8074-48b034327011 00:29:18.915 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 77c45368-d26f-4b14-8074-48b034327011 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:19.174 20:57:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:19.175 20:57:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:19.434 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:19.434 { 00:29:19.434 "name": "f7965b84-5fc2-4021-83ac-66d07ea9b455", 00:29:19.434 "aliases": [ 00:29:19.434 "lvs/nvme0n1p0" 00:29:19.434 ], 00:29:19.434 "product_name": "Logical Volume", 00:29:19.434 "block_size": 4096, 00:29:19.434 "num_blocks": 26476544, 00:29:19.434 "uuid": "f7965b84-5fc2-4021-83ac-66d07ea9b455", 00:29:19.434 "assigned_rate_limits": { 00:29:19.434 "rw_ios_per_sec": 0, 00:29:19.434 "rw_mbytes_per_sec": 0, 00:29:19.434 "r_mbytes_per_sec": 0, 00:29:19.434 "w_mbytes_per_sec": 0 00:29:19.434 }, 00:29:19.434 "claimed": false, 00:29:19.434 "zoned": false, 00:29:19.434 "supported_io_types": { 00:29:19.434 "read": true, 00:29:19.434 "write": true, 00:29:19.434 "unmap": true, 00:29:19.434 "flush": false, 00:29:19.434 "reset": true, 00:29:19.434 "nvme_admin": false, 00:29:19.434 "nvme_io": false, 00:29:19.434 "nvme_io_md": false, 00:29:19.434 "write_zeroes": true, 00:29:19.434 "zcopy": false, 00:29:19.434 "get_zone_info": false, 00:29:19.434 "zone_management": false, 00:29:19.434 "zone_append": false, 00:29:19.434 "compare": false, 00:29:19.434 "compare_and_write": false, 00:29:19.434 "abort": false, 00:29:19.434 "seek_hole": true, 00:29:19.434 "seek_data": true, 00:29:19.434 "copy": false, 00:29:19.434 "nvme_iov_md": false 00:29:19.434 }, 00:29:19.434 "driver_specific": { 00:29:19.434 "lvol": { 00:29:19.434 "lvol_store_uuid": "77c45368-d26f-4b14-8074-48b034327011", 00:29:19.434 "base_bdev": "nvme0n1", 00:29:19.434 "thin_provision": true, 00:29:19.434 "num_allocated_clusters": 0, 00:29:19.434 "snapshot": false, 00:29:19.434 "clone": false, 00:29:19.434 "esnap_clone": false 00:29:19.434 } 00:29:19.434 } 00:29:19.434 } 00:29:19.434 ]' 00:29:19.434 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:19.434 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:19.434 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:19.434 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:19.434 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:19.434 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:19.434 20:57:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:29:19.434 20:57:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:19.434 20:57:14 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:19.693 20:57:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:19.693 20:57:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:19.693 20:57:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:19.693 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:19.693 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:19.693 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:19.693 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:19.951 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:19.951 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:19.951 { 00:29:19.951 "name": "f7965b84-5fc2-4021-83ac-66d07ea9b455", 00:29:19.951 "aliases": [ 00:29:19.951 "lvs/nvme0n1p0" 00:29:19.951 ], 00:29:19.951 "product_name": "Logical Volume", 00:29:19.951 "block_size": 4096, 00:29:19.951 "num_blocks": 26476544, 00:29:19.951 "uuid": "f7965b84-5fc2-4021-83ac-66d07ea9b455", 00:29:19.951 "assigned_rate_limits": { 00:29:19.951 "rw_ios_per_sec": 0, 00:29:19.951 "rw_mbytes_per_sec": 0, 00:29:19.951 "r_mbytes_per_sec": 0, 00:29:19.951 "w_mbytes_per_sec": 0 00:29:19.951 }, 00:29:19.951 "claimed": false, 00:29:19.951 "zoned": false, 00:29:19.951 "supported_io_types": { 00:29:19.951 "read": true, 00:29:19.951 "write": true, 00:29:19.951 "unmap": true, 00:29:19.951 "flush": false, 00:29:19.951 "reset": true, 00:29:19.951 "nvme_admin": false, 00:29:19.951 "nvme_io": false, 00:29:19.951 "nvme_io_md": false, 00:29:19.951 "write_zeroes": true, 00:29:19.951 "zcopy": false, 00:29:19.951 "get_zone_info": false, 00:29:19.951 "zone_management": false, 00:29:19.951 "zone_append": false, 00:29:19.951 "compare": false, 00:29:19.951 "compare_and_write": false, 00:29:19.951 "abort": false, 00:29:19.951 "seek_hole": true, 00:29:19.951 "seek_data": true, 00:29:19.951 "copy": false, 00:29:19.951 "nvme_iov_md": false 00:29:19.951 }, 00:29:19.951 "driver_specific": { 00:29:19.951 "lvol": { 00:29:19.951 "lvol_store_uuid": "77c45368-d26f-4b14-8074-48b034327011", 00:29:19.951 "base_bdev": "nvme0n1", 00:29:19.951 "thin_provision": true, 00:29:19.951 "num_allocated_clusters": 0, 00:29:19.951 "snapshot": false, 00:29:19.951 "clone": false, 00:29:19.951 "esnap_clone": false 00:29:19.951 } 00:29:19.951 } 00:29:19.951 } 00:29:19.951 ]' 00:29:19.951 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:19.951 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:19.951 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:20.210 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:20.210 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:20.210 20:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:20.210 20:57:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:29:20.210 20:57:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:20.210 20:57:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:29:20.210 20:57:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:20.210 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:20.210 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:20.210 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:20.210 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:20.210 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f7965b84-5fc2-4021-83ac-66d07ea9b455 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:20.470 { 00:29:20.470 "name": "f7965b84-5fc2-4021-83ac-66d07ea9b455", 00:29:20.470 "aliases": [ 00:29:20.470 "lvs/nvme0n1p0" 00:29:20.470 ], 00:29:20.470 "product_name": "Logical Volume", 00:29:20.470 "block_size": 4096, 00:29:20.470 "num_blocks": 26476544, 00:29:20.470 "uuid": "f7965b84-5fc2-4021-83ac-66d07ea9b455", 00:29:20.470 "assigned_rate_limits": { 00:29:20.470 "rw_ios_per_sec": 0, 00:29:20.470 "rw_mbytes_per_sec": 0, 00:29:20.470 "r_mbytes_per_sec": 0, 00:29:20.470 "w_mbytes_per_sec": 0 00:29:20.470 }, 00:29:20.470 "claimed": false, 00:29:20.470 "zoned": false, 00:29:20.470 "supported_io_types": { 00:29:20.470 "read": true, 00:29:20.470 "write": true, 00:29:20.470 "unmap": true, 00:29:20.470 "flush": false, 00:29:20.470 "reset": true, 00:29:20.470 "nvme_admin": false, 00:29:20.470 "nvme_io": false, 00:29:20.470 "nvme_io_md": false, 00:29:20.470 "write_zeroes": true, 00:29:20.470 "zcopy": false, 00:29:20.470 "get_zone_info": false, 00:29:20.470 "zone_management": false, 00:29:20.470 "zone_append": false, 00:29:20.470 "compare": false, 00:29:20.470 "compare_and_write": false, 00:29:20.470 "abort": false, 00:29:20.470 "seek_hole": true, 00:29:20.470 "seek_data": true, 00:29:20.470 "copy": false, 00:29:20.470 "nvme_iov_md": false 00:29:20.470 }, 00:29:20.470 "driver_specific": { 00:29:20.470 "lvol": { 00:29:20.470 "lvol_store_uuid": "77c45368-d26f-4b14-8074-48b034327011", 00:29:20.470 "base_bdev": "nvme0n1", 00:29:20.470 "thin_provision": true, 00:29:20.470 "num_allocated_clusters": 0, 00:29:20.470 "snapshot": false, 00:29:20.470 "clone": false, 00:29:20.470 "esnap_clone": false 00:29:20.470 } 00:29:20.470 } 00:29:20.470 } 00:29:20.470 ]' 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f7965b84-5fc2-4021-83ac-66d07ea9b455 
--l2p_dram_limit 10' 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:20.470 20:57:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f7965b84-5fc2-4021-83ac-66d07ea9b455 --l2p_dram_limit 10 -c nvc0n1p0 00:29:20.730 [2024-11-26 20:57:15.630261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.730 [2024-11-26 20:57:15.630312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:20.730 [2024-11-26 20:57:15.630332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:20.730 [2024-11-26 20:57:15.630343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.730 [2024-11-26 20:57:15.630417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.730 [2024-11-26 20:57:15.630431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:20.730 [2024-11-26 20:57:15.630445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:20.730 [2024-11-26 20:57:15.630455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.730 [2024-11-26 20:57:15.630480] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:20.730 [2024-11-26 20:57:15.631672] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:20.730 [2024-11-26 20:57:15.631834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.730 [2024-11-26 20:57:15.631851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:20.730 [2024-11-26 20:57:15.631865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.353 ms 00:29:20.730 [2024-11-26 20:57:15.631875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.730 [2024-11-26 20:57:15.632016] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e81524cd-e20b-431d-be2e-32d90f4abaa5 00:29:20.730 [2024-11-26 20:57:15.633414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.730 [2024-11-26 20:57:15.633452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:20.730 [2024-11-26 20:57:15.633465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:20.730 [2024-11-26 20:57:15.633480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.730 [2024-11-26 20:57:15.640821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.730 [2024-11-26 20:57:15.640861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:20.730 [2024-11-26 20:57:15.640874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.298 ms 00:29:20.730 [2024-11-26 20:57:15.640893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.730 [2024-11-26 20:57:15.641008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.730 [2024-11-26 20:57:15.641026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:20.730 [2024-11-26 20:57:15.641038] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:29:20.730 [2024-11-26 20:57:15.641055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.730 [2024-11-26 20:57:15.641119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.730 [2024-11-26 20:57:15.641134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:20.730 [2024-11-26 20:57:15.641148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:20.730 [2024-11-26 20:57:15.641161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.730 [2024-11-26 20:57:15.641189] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:20.730 [2024-11-26 20:57:15.646517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.730 [2024-11-26 20:57:15.646550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:20.730 [2024-11-26 20:57:15.646567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.334 ms 00:29:20.730 [2024-11-26 20:57:15.646577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.730 [2024-11-26 20:57:15.646630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.730 [2024-11-26 20:57:15.646642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:20.730 [2024-11-26 20:57:15.646656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:20.730 [2024-11-26 20:57:15.646666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.730 [2024-11-26 20:57:15.646715] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:20.730 [2024-11-26 20:57:15.646861] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:20.730 [2024-11-26 20:57:15.646881] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:20.730 [2024-11-26 20:57:15.646895] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:20.730 [2024-11-26 20:57:15.646912] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:20.730 [2024-11-26 20:57:15.646924] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:20.730 [2024-11-26 20:57:15.646938] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:20.730 [2024-11-26 20:57:15.646950] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:20.730 [2024-11-26 20:57:15.646962] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:20.730 [2024-11-26 20:57:15.646972] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:20.730 [2024-11-26 20:57:15.646986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.730 [2024-11-26 20:57:15.647007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:20.730 [2024-11-26 20:57:15.647020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:29:20.730 [2024-11-26 20:57:15.647030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.730 [2024-11-26 20:57:15.647109] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.730 [2024-11-26 20:57:15.647120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:20.730 [2024-11-26 20:57:15.647133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:20.730 [2024-11-26 20:57:15.647143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.730 [2024-11-26 20:57:15.647243] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:20.730 [2024-11-26 20:57:15.647256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:20.730 [2024-11-26 20:57:15.647269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:20.730 [2024-11-26 20:57:15.647280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.730 [2024-11-26 20:57:15.647293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:20.730 [2024-11-26 20:57:15.647302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:20.730 [2024-11-26 20:57:15.647314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:20.730 [2024-11-26 20:57:15.647323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:20.730 [2024-11-26 20:57:15.647335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:20.730 [2024-11-26 20:57:15.647344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:20.730 [2024-11-26 20:57:15.647356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:20.730 [2024-11-26 20:57:15.647366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:20.730 [2024-11-26 20:57:15.647378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:20.730 [2024-11-26 20:57:15.647387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:20.730 [2024-11-26 20:57:15.647399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:20.730 [2024-11-26 20:57:15.647408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.730 [2024-11-26 20:57:15.647422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:20.730 [2024-11-26 20:57:15.647431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:20.730 [2024-11-26 20:57:15.647445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.730 [2024-11-26 20:57:15.647455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:20.730 [2024-11-26 20:57:15.647466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:20.730 [2024-11-26 20:57:15.647475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.730 [2024-11-26 20:57:15.647487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:20.730 [2024-11-26 20:57:15.647499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:20.730 [2024-11-26 20:57:15.647511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.730 [2024-11-26 20:57:15.647521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:20.730 [2024-11-26 20:57:15.647533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:20.730 [2024-11-26 20:57:15.647542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.730 [2024-11-26 20:57:15.647553] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:20.730 [2024-11-26 20:57:15.647563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:20.730 [2024-11-26 20:57:15.647574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.730 [2024-11-26 20:57:15.647583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:20.730 [2024-11-26 20:57:15.647607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:20.730 [2024-11-26 20:57:15.647628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:20.730 [2024-11-26 20:57:15.647641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:20.730 [2024-11-26 20:57:15.647650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:20.730 [2024-11-26 20:57:15.647662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:20.730 [2024-11-26 20:57:15.647671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:20.730 [2024-11-26 20:57:15.647683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:20.730 [2024-11-26 20:57:15.647692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.730 [2024-11-26 20:57:15.647704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:20.730 [2024-11-26 20:57:15.647713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:20.731 [2024-11-26 20:57:15.647725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.731 [2024-11-26 20:57:15.647734] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:20.731 [2024-11-26 20:57:15.647747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:20.731 [2024-11-26 20:57:15.647756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:20.731 [2024-11-26 20:57:15.647770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.731 [2024-11-26 20:57:15.647780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:20.731 [2024-11-26 20:57:15.647795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:20.731 [2024-11-26 20:57:15.647804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:20.731 [2024-11-26 20:57:15.647817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:20.731 [2024-11-26 20:57:15.647826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:20.731 [2024-11-26 20:57:15.647838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:20.731 [2024-11-26 20:57:15.647852] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:20.731 [2024-11-26 20:57:15.647870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:20.731 [2024-11-26 20:57:15.647883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:20.731 [2024-11-26 20:57:15.647897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:20.731 [2024-11-26 20:57:15.647907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:20.731 [2024-11-26 20:57:15.647920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:20.731 [2024-11-26 20:57:15.647931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:20.731 [2024-11-26 20:57:15.647944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:20.731 [2024-11-26 20:57:15.647954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:20.731 [2024-11-26 20:57:15.647967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:20.731 [2024-11-26 20:57:15.647977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:20.731 [2024-11-26 20:57:15.647993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:20.731 [2024-11-26 20:57:15.648003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:20.731 [2024-11-26 20:57:15.648016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:20.731 [2024-11-26 20:57:15.648026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:20.731 [2024-11-26 20:57:15.648040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:20.731 [2024-11-26 20:57:15.648051] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:20.731 [2024-11-26 20:57:15.648066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:20.731 [2024-11-26 20:57:15.648076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:20.731 [2024-11-26 20:57:15.648089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:20.731 [2024-11-26 20:57:15.648100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:20.731 [2024-11-26 20:57:15.648113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:20.731 [2024-11-26 20:57:15.648124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.731 [2024-11-26 20:57:15.648137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:20.731 [2024-11-26 20:57:15.648147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:29:20.731 [2024-11-26 20:57:15.648159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.731 [2024-11-26 20:57:15.648201] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:20.731 [2024-11-26 20:57:15.648219] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:23.260 [2024-11-26 20:57:18.176349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.260 [2024-11-26 20:57:18.176413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:23.260 [2024-11-26 20:57:18.176431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2528.131 ms 00:29:23.260 [2024-11-26 20:57:18.176445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.260 [2024-11-26 20:57:18.214380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.260 [2024-11-26 20:57:18.214433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:23.260 [2024-11-26 20:57:18.214458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.616 ms 00:29:23.260 [2024-11-26 20:57:18.214472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.260 [2024-11-26 20:57:18.214651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.260 [2024-11-26 20:57:18.214669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:23.260 [2024-11-26 20:57:18.214682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:29:23.260 [2024-11-26 20:57:18.214701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.519 [2024-11-26 20:57:18.261021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.519 [2024-11-26 20:57:18.261222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:23.519 [2024-11-26 20:57:18.261245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.273 ms 00:29:23.519 [2024-11-26 20:57:18.261261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.519 [2024-11-26 20:57:18.261314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.519 [2024-11-26 20:57:18.261329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:23.519 [2024-11-26 20:57:18.261340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:23.519 [2024-11-26 20:57:18.261363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.519 [2024-11-26 20:57:18.261880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.519 [2024-11-26 20:57:18.261904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:23.519 [2024-11-26 20:57:18.261915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:29:23.519 [2024-11-26 20:57:18.261928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.519 [2024-11-26 20:57:18.262031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.519 [2024-11-26 20:57:18.262049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:23.519 [2024-11-26 20:57:18.262060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:29:23.519 [2024-11-26 20:57:18.262075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.519 [2024-11-26 20:57:18.282095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.519 [2024-11-26 20:57:18.282136] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:23.519 [2024-11-26 20:57:18.282149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.000 ms 00:29:23.519 [2024-11-26 20:57:18.282162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.519 [2024-11-26 20:57:18.306729] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:23.519 [2024-11-26 20:57:18.309953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.519 [2024-11-26 20:57:18.309983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:23.519 [2024-11-26 20:57:18.310000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.685 ms 00:29:23.519 [2024-11-26 20:57:18.310011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.519 [2024-11-26 20:57:18.384605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.519 [2024-11-26 20:57:18.384676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:23.519 [2024-11-26 20:57:18.384699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.548 ms 00:29:23.519 [2024-11-26 20:57:18.384712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.519 [2024-11-26 20:57:18.384936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.519 [2024-11-26 20:57:18.384953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:23.519 [2024-11-26 20:57:18.384973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:29:23.519 [2024-11-26 20:57:18.384985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.519 [2024-11-26 20:57:18.421106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.519 [2024-11-26 20:57:18.421149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:23.519 [2024-11-26 20:57:18.421167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.041 ms 00:29:23.519 [2024-11-26 20:57:18.421177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.519 [2024-11-26 20:57:18.456563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.519 [2024-11-26 20:57:18.456597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:23.519 [2024-11-26 20:57:18.456642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.339 ms 00:29:23.519 [2024-11-26 20:57:18.456653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.519 [2024-11-26 20:57:18.457398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.519 [2024-11-26 20:57:18.457420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:23.519 [2024-11-26 20:57:18.457437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:29:23.519 [2024-11-26 20:57:18.457448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.779 [2024-11-26 20:57:18.552992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.779 [2024-11-26 20:57:18.553199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:23.779 [2024-11-26 20:57:18.553248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.485 ms 00:29:23.779 [2024-11-26 20:57:18.553260] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.779 [2024-11-26 20:57:18.590013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.779 [2024-11-26 20:57:18.590051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:23.779 [2024-11-26 20:57:18.590084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.666 ms 00:29:23.779 [2024-11-26 20:57:18.590095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.779 [2024-11-26 20:57:18.625650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.779 [2024-11-26 20:57:18.625684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:23.779 [2024-11-26 20:57:18.625699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.508 ms 00:29:23.779 [2024-11-26 20:57:18.625709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.779 [2024-11-26 20:57:18.661206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.779 [2024-11-26 20:57:18.661351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:23.779 [2024-11-26 20:57:18.661394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.453 ms 00:29:23.779 [2024-11-26 20:57:18.661404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.779 [2024-11-26 20:57:18.661490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.779 [2024-11-26 20:57:18.661503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:23.779 [2024-11-26 20:57:18.661521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:23.779 [2024-11-26 20:57:18.661531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.779 [2024-11-26 20:57:18.661657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.779 [2024-11-26 20:57:18.661674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:23.779 [2024-11-26 20:57:18.661688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:23.779 [2024-11-26 20:57:18.661698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.779 [2024-11-26 20:57:18.662773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3031.991 ms, result 0 00:29:23.779 { 00:29:23.779 "name": "ftl0", 00:29:23.779 "uuid": "e81524cd-e20b-431d-be2e-32d90f4abaa5" 00:29:23.779 } 00:29:23.779 20:57:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:29:23.779 20:57:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:24.042 20:57:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:29:24.042 20:57:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:29:24.042 20:57:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:29:24.302 /dev/nbd0 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:29:24.302 1+0 records in 00:29:24.302 1+0 records out 00:29:24.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279764 s, 14.6 MB/s 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:29:24.302 20:57:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:29:24.302 [2024-11-26 20:57:19.275527] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:29:24.302 [2024-11-26 20:57:19.275667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81706 ] 00:29:24.562 [2024-11-26 20:57:19.453585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.821 [2024-11-26 20:57:19.618936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.209  [2024-11-26T20:57:22.159Z] Copying: 199/1024 [MB] (199 MBps) [2024-11-26T20:57:23.096Z] Copying: 395/1024 [MB] (196 MBps) [2024-11-26T20:57:24.030Z] Copying: 587/1024 [MB] (192 MBps) [2024-11-26T20:57:24.967Z] Copying: 789/1024 [MB] (202 MBps) [2024-11-26T20:57:25.226Z] Copying: 989/1024 [MB] (199 MBps) [2024-11-26T20:57:26.604Z] Copying: 1024/1024 [MB] (average 198 MBps) 00:29:31.610 00:29:31.610 20:57:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:33.512 20:57:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:29:33.512 [2024-11-26 20:57:28.174441] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
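The staging steps just traced (dirty_shutdown.sh@75 through @77) are the heart of the integrity check: spdk_dd writes 262144 blocks of 4 KiB, i.e. 1 GiB of random data, to a scratch file, md5sum records its digest, and the same file is then replayed onto the FTL device through the /dev/nbd0 export with O_DIRECT. A condensed sketch of the same three commands, run from the SPDK repo root; storing the digest in a file is illustrative, the script itself keeps it in a shell variable:

  # Stage a 1 GiB random payload (262144 x 4 KiB blocks) on the local filesystem.
  ./build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144

  # Record the reference checksum before the payload touches the device under test.
  md5sum test/ftl/testfile > test/ftl/testfile.md5

  # Replay the payload onto the FTL bdev via the nbd export, bypassing the page cache.
  ./build/bin/spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The digest taken here is the yardstick for the rest of the test: after the deliberately dirty shutdown, the device must hand the same bytes back.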
00:29:33.512 [2024-11-26 20:57:28.174605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81808 ] 00:29:33.512 [2024-11-26 20:57:28.375141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.771 [2024-11-26 20:57:28.518218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.145 [2024-11-26T20:57:31.074Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-26T20:58:20.163Z] Copying: 874/1024 [MB] (17 MBps) 
[2024-11-26T20:58:21.098Z] Copying: 891/1024 [MB] (16 MBps) [2024-11-26T20:58:22.033Z] Copying: 908/1024 [MB] (17 MBps) [2024-11-26T20:58:22.967Z] Copying: 927/1024 [MB] (18 MBps) [2024-11-26T20:58:23.903Z] Copying: 945/1024 [MB] (18 MBps) [2024-11-26T20:58:25.279Z] Copying: 963/1024 [MB] (17 MBps) [2024-11-26T20:58:25.847Z] Copying: 981/1024 [MB] (18 MBps) [2024-11-26T20:58:27.225Z] Copying: 999/1024 [MB] (18 MBps) [2024-11-26T20:58:27.225Z] Copying: 1017/1024 [MB] (18 MBps) [2024-11-26T20:58:28.602Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:30:33.608 00:30:33.608 20:58:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:30:33.608 20:58:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:30:33.608 20:58:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:33.868 [2024-11-26 20:58:28.755626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.868 [2024-11-26 20:58:28.755685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:33.868 [2024-11-26 20:58:28.755703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:33.868 [2024-11-26 20:58:28.755721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.868 [2024-11-26 20:58:28.755749] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:33.868 [2024-11-26 20:58:28.760257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.868 [2024-11-26 20:58:28.760413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:33.868 [2024-11-26 20:58:28.760443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.482 ms 00:30:33.868 [2024-11-26 20:58:28.760454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.868 [2024-11-26 20:58:28.762472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.868 [2024-11-26 20:58:28.762511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:33.868 [2024-11-26 20:58:28.762527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.966 ms 00:30:33.868 [2024-11-26 20:58:28.762538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.868 [2024-11-26 20:58:28.777717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.868 [2024-11-26 20:58:28.777754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:33.868 [2024-11-26 20:58:28.777772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.149 ms 00:30:33.868 [2024-11-26 20:58:28.777782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.868 [2024-11-26 20:58:28.782742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.868 [2024-11-26 20:58:28.782773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:33.868 [2024-11-26 20:58:28.782795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.918 ms 00:30:33.868 [2024-11-26 20:58:28.782805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.868 [2024-11-26 20:58:28.818436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.868 [2024-11-26 20:58:28.818471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
NV cache metadata 00:30:33.868 [2024-11-26 20:58:28.818488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.548 ms 00:30:33.868 [2024-11-26 20:58:28.818498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.868 [2024-11-26 20:58:28.839777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.868 [2024-11-26 20:58:28.839815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:33.868 [2024-11-26 20:58:28.839834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.231 ms 00:30:33.868 [2024-11-26 20:58:28.839845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.868 [2024-11-26 20:58:28.839990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.868 [2024-11-26 20:58:28.840004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:33.868 [2024-11-26 20:58:28.840017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:30:33.868 [2024-11-26 20:58:28.840027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.128 [2024-11-26 20:58:28.876278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.128 [2024-11-26 20:58:28.876313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:34.128 [2024-11-26 20:58:28.876330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.227 ms 00:30:34.128 [2024-11-26 20:58:28.876340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.128 [2024-11-26 20:58:28.911770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.128 [2024-11-26 20:58:28.911805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:34.128 [2024-11-26 20:58:28.911821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.384 ms 00:30:34.128 [2024-11-26 20:58:28.911831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.128 [2024-11-26 20:58:28.947098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.128 [2024-11-26 20:58:28.947132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:34.128 [2024-11-26 20:58:28.947148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.218 ms 00:30:34.128 [2024-11-26 20:58:28.947157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.128 [2024-11-26 20:58:28.981856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.128 [2024-11-26 20:58:28.981889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:34.128 [2024-11-26 20:58:28.981904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.601 ms 00:30:34.128 [2024-11-26 20:58:28.981914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.128 [2024-11-26 20:58:28.981954] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:34.129 [2024-11-26 20:58:28.981971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.981986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.981997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982011] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 
20:58:28.982313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:30:34.129 [2024-11-26 20:58:28.982631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.982989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.983001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.983011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.983026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:34.129 [2024-11-26 20:58:28.983035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:34.130 [2024-11-26 20:58:28.983179] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:34.130 [2024-11-26 20:58:28.983191] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e81524cd-e20b-431d-be2e-32d90f4abaa5 00:30:34.130 [2024-11-26 20:58:28.983201] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:34.130 
[2024-11-26 20:58:28.983215] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:34.130 [2024-11-26 20:58:28.983227] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:34.130 [2024-11-26 20:58:28.983240] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:34.130 [2024-11-26 20:58:28.983250] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:34.130 [2024-11-26 20:58:28.983262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:34.130 [2024-11-26 20:58:28.983271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:34.130 [2024-11-26 20:58:28.983282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:34.130 [2024-11-26 20:58:28.983291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:34.130 [2024-11-26 20:58:28.983303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.130 [2024-11-26 20:58:28.983313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:34.130 [2024-11-26 20:58:28.983325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:30:34.130 [2024-11-26 20:58:28.983335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.130 [2024-11-26 20:58:29.002510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.130 [2024-11-26 20:58:29.002543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:34.130 [2024-11-26 20:58:29.002558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.123 ms 00:30:34.130 [2024-11-26 20:58:29.002568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.130 [2024-11-26 20:58:29.003185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.130 [2024-11-26 20:58:29.003203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:34.130 [2024-11-26 20:58:29.003217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:30:34.130 [2024-11-26 20:58:29.003227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.130 [2024-11-26 20:58:29.066467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.130 [2024-11-26 20:58:29.066501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:34.130 [2024-11-26 20:58:29.066516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.130 [2024-11-26 20:58:29.066526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.130 [2024-11-26 20:58:29.066584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.130 [2024-11-26 20:58:29.066595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:34.130 [2024-11-26 20:58:29.066609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.130 [2024-11-26 20:58:29.066631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.130 [2024-11-26 20:58:29.066748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.130 [2024-11-26 20:58:29.066766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:34.130 [2024-11-26 20:58:29.066779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.130 [2024-11-26 20:58:29.066790] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.130 [2024-11-26 20:58:29.066815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.130 [2024-11-26 20:58:29.066825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:34.130 [2024-11-26 20:58:29.066838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.130 [2024-11-26 20:58:29.066849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.388 [2024-11-26 20:58:29.191331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.388 [2024-11-26 20:58:29.191537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:34.388 [2024-11-26 20:58:29.191566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.388 [2024-11-26 20:58:29.191587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.388 [2024-11-26 20:58:29.293816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.388 [2024-11-26 20:58:29.293869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:34.388 [2024-11-26 20:58:29.293886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.388 [2024-11-26 20:58:29.293897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.388 [2024-11-26 20:58:29.294022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.388 [2024-11-26 20:58:29.294035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:34.388 [2024-11-26 20:58:29.294051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.388 [2024-11-26 20:58:29.294061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.388 [2024-11-26 20:58:29.294142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.388 [2024-11-26 20:58:29.294155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:34.388 [2024-11-26 20:58:29.294168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.388 [2024-11-26 20:58:29.294178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.388 [2024-11-26 20:58:29.294297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.388 [2024-11-26 20:58:29.294310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:34.388 [2024-11-26 20:58:29.294323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.388 [2024-11-26 20:58:29.294336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.388 [2024-11-26 20:58:29.294375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.388 [2024-11-26 20:58:29.294387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:34.388 [2024-11-26 20:58:29.294400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.389 [2024-11-26 20:58:29.294410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.389 [2024-11-26 20:58:29.294453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.389 [2024-11-26 20:58:29.294464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:34.389 [2024-11-26 20:58:29.294476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:30:34.389 [2024-11-26 20:58:29.294490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.389 [2024-11-26 20:58:29.294541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:34.389 [2024-11-26 20:58:29.294552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:34.389 [2024-11-26 20:58:29.294565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:34.389 [2024-11-26 20:58:29.294575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.389 [2024-11-26 20:58:29.294753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 539.109 ms, result 0 00:30:34.389 true 00:30:34.389 20:58:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81562 00:30:34.389 20:58:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81562 00:30:34.389 20:58:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:30:34.647 [2024-11-26 20:58:29.459636] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:30:34.647 [2024-11-26 20:58:29.459992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82449 ] 00:30:34.906 [2024-11-26 20:58:29.655223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.906 [2024-11-26 20:58:29.766708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.284  [2024-11-26T20:58:32.216Z] Copying: 205/1024 [MB] (205 MBps) [2024-11-26T20:58:33.153Z] Copying: 414/1024 [MB] (208 MBps) [2024-11-26T20:58:34.088Z] Copying: 622/1024 [MB] (208 MBps) [2024-11-26T20:58:35.463Z] Copying: 828/1024 [MB] (205 MBps) [2024-11-26T20:58:36.400Z] Copying: 1024/1024 [MB] (average 205 MBps) 00:30:41.406 00:30:41.406 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81562 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:30:41.406 20:58:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:41.406 [2024-11-26 20:58:36.297260] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
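The script steps interleaved above are the crux of the dirty-shutdown scenario: once ftl0 is unloaded and the 'FTL shutdown' management process reports result 0, dirty_shutdown.sh SIGKILLs the SPDK target (pid 81562), fills a 1 GiB reference file from /dev/urandom, and replays that file into ftl0 from the stand-alone spdk_dd instance whose startup begins here. A minimal sketch of those steps; the paths, sizes and flags are taken from the log itself, while $spdk_tgt_pid and $SPDK_DIR are assumptions standing in for variables the surrounding harness provides:

    # $spdk_tgt_pid / $SPDK_DIR: assumed to be set by the surrounding harness.
    # Steps 83-84: kill the target outright and drop its trace file, so nothing
    # runs beyond the bdev_ftl_unload already performed.
    kill -9 "$spdk_tgt_pid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$spdk_tgt_pid"

    # Step 87: 262144 x 4 KiB blocks = 1 GiB of random reference data.
    "$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom \
        --of="$SPDK_DIR/test/ftl/testfile2" --bs=4096 --count=262144

    # Step 88: replay it into the FTL bdev at a 1 GiB offset; --json lets
    # spdk_dd bring ftl0 up by itself from the saved configuration.
    "$SPDK_BIN_DIR/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 --json="$SPDK_DIR/test/ftl/config/ftl.json"

In the startup trace that follows, note the Restore steps (NV cache, valid map, band info, trim, P2L checkpoints, L2P) bringing back the device's persisted state before the copy begins.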
00:30:41.406 [2024-11-26 20:58:36.297457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82519 ] 00:30:41.665 [2024-11-26 20:58:36.486537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.923 [2024-11-26 20:58:36.661746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.183 [2024-11-26 20:58:37.019477] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:42.183 [2024-11-26 20:58:37.019798] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:42.183 [2024-11-26 20:58:37.085927] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:42.183 [2024-11-26 20:58:37.086270] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:42.183 [2024-11-26 20:58:37.086521] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:42.445 [2024-11-26 20:58:37.372118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.445 [2024-11-26 20:58:37.372173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:42.445 [2024-11-26 20:58:37.372205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:42.445 [2024-11-26 20:58:37.372220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.445 [2024-11-26 20:58:37.372278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.445 [2024-11-26 20:58:37.372291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:42.445 [2024-11-26 20:58:37.372303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:42.445 [2024-11-26 20:58:37.372313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.445 [2024-11-26 20:58:37.372334] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:42.445 [2024-11-26 20:58:37.373319] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:42.445 [2024-11-26 20:58:37.373341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.445 [2024-11-26 20:58:37.373353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:42.445 [2024-11-26 20:58:37.373364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:30:42.446 [2024-11-26 20:58:37.373374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.446 [2024-11-26 20:58:37.374874] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:42.446 [2024-11-26 20:58:37.394209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.446 [2024-11-26 20:58:37.394245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:42.446 [2024-11-26 20:58:37.394259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.336 ms 00:30:42.446 [2024-11-26 20:58:37.394286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.446 [2024-11-26 20:58:37.394369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.446 [2024-11-26 20:58:37.394387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:30:42.446 [2024-11-26 20:58:37.394399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:42.446 [2024-11-26 20:58:37.394409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.446 [2024-11-26 20:58:37.401219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.446 [2024-11-26 20:58:37.401388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:42.446 [2024-11-26 20:58:37.401424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.732 ms 00:30:42.446 [2024-11-26 20:58:37.401436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.446 [2024-11-26 20:58:37.401526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.446 [2024-11-26 20:58:37.401539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:42.446 [2024-11-26 20:58:37.401550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:42.446 [2024-11-26 20:58:37.401561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.446 [2024-11-26 20:58:37.401610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.446 [2024-11-26 20:58:37.401622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:42.446 [2024-11-26 20:58:37.401655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:42.446 [2024-11-26 20:58:37.401665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.446 [2024-11-26 20:58:37.401692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:42.446 [2024-11-26 20:58:37.406390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.446 [2024-11-26 20:58:37.406421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:42.446 [2024-11-26 20:58:37.406433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.705 ms 00:30:42.446 [2024-11-26 20:58:37.406458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.446 [2024-11-26 20:58:37.406488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.446 [2024-11-26 20:58:37.406499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:42.446 [2024-11-26 20:58:37.406509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:42.446 [2024-11-26 20:58:37.406519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.446 [2024-11-26 20:58:37.406579] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:42.446 [2024-11-26 20:58:37.406604] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:42.446 [2024-11-26 20:58:37.406655] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:42.446 [2024-11-26 20:58:37.406674] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:42.446 [2024-11-26 20:58:37.406764] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:42.446 [2024-11-26 20:58:37.406794] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:42.446 
[2024-11-26 20:58:37.406807] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:42.446 [2024-11-26 20:58:37.406837] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:42.446 [2024-11-26 20:58:37.406849] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:42.446 [2024-11-26 20:58:37.406860] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:42.446 [2024-11-26 20:58:37.406871] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:42.446 [2024-11-26 20:58:37.406881] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:42.446 [2024-11-26 20:58:37.406892] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:42.446 [2024-11-26 20:58:37.406903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.446 [2024-11-26 20:58:37.406913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:42.446 [2024-11-26 20:58:37.406924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:30:42.446 [2024-11-26 20:58:37.406934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.446 [2024-11-26 20:58:37.407004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.446 [2024-11-26 20:58:37.407019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:42.446 [2024-11-26 20:58:37.407029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:42.446 [2024-11-26 20:58:37.407039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.446 [2024-11-26 20:58:37.407150] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:42.446 [2024-11-26 20:58:37.407165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:42.446 [2024-11-26 20:58:37.407175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:42.446 [2024-11-26 20:58:37.407186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:42.446 [2024-11-26 20:58:37.407206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:42.446 [2024-11-26 20:58:37.407225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:42.446 [2024-11-26 20:58:37.407235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:42.446 [2024-11-26 20:58:37.407267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:42.446 [2024-11-26 20:58:37.407277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:42.446 [2024-11-26 20:58:37.407286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:42.446 [2024-11-26 20:58:37.407295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:42.446 [2024-11-26 20:58:37.407305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:42.446 [2024-11-26 20:58:37.407314] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:42.446 [2024-11-26 20:58:37.407333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:42.446 [2024-11-26 20:58:37.407342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:42.446 [2024-11-26 20:58:37.407363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:42.446 [2024-11-26 20:58:37.407381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:42.446 [2024-11-26 20:58:37.407390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:42.446 [2024-11-26 20:58:37.407409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:42.446 [2024-11-26 20:58:37.407418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:42.446 [2024-11-26 20:58:37.407436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:42.446 [2024-11-26 20:58:37.407446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:42.446 [2024-11-26 20:58:37.407464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:42.446 [2024-11-26 20:58:37.407473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:42.446 [2024-11-26 20:58:37.407493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:42.446 [2024-11-26 20:58:37.407502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:42.446 [2024-11-26 20:58:37.407511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:42.446 [2024-11-26 20:58:37.407521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:42.446 [2024-11-26 20:58:37.407530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:42.446 [2024-11-26 20:58:37.407539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:42.446 [2024-11-26 20:58:37.407558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:42.446 [2024-11-26 20:58:37.407569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.446 [2024-11-26 20:58:37.407588] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:42.446 [2024-11-26 20:58:37.407599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:42.446 [2024-11-26 20:58:37.407624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:42.446 [2024-11-26 20:58:37.407636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:42.446 [2024-11-26 
20:58:37.407646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:42.446 [2024-11-26 20:58:37.407656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:42.446 [2024-11-26 20:58:37.407666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:42.447 [2024-11-26 20:58:37.407675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:42.447 [2024-11-26 20:58:37.407684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:42.447 [2024-11-26 20:58:37.407694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:42.447 [2024-11-26 20:58:37.407705] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:42.447 [2024-11-26 20:58:37.407717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:42.447 [2024-11-26 20:58:37.407728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:42.447 [2024-11-26 20:58:37.407739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:42.447 [2024-11-26 20:58:37.407749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:42.447 [2024-11-26 20:58:37.407760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:42.447 [2024-11-26 20:58:37.407770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:42.447 [2024-11-26 20:58:37.407780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:42.447 [2024-11-26 20:58:37.407790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:42.447 [2024-11-26 20:58:37.407801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:42.447 [2024-11-26 20:58:37.407811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:42.447 [2024-11-26 20:58:37.407821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:42.447 [2024-11-26 20:58:37.407831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:42.447 [2024-11-26 20:58:37.407842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:42.447 [2024-11-26 20:58:37.407852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:42.447 [2024-11-26 20:58:37.407863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:42.447 [2024-11-26 20:58:37.407873] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:30:42.447 [2024-11-26 20:58:37.407885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:42.447 [2024-11-26 20:58:37.407896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:42.447 [2024-11-26 20:58:37.407906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:42.447 [2024-11-26 20:58:37.407917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:42.447 [2024-11-26 20:58:37.407927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:42.447 [2024-11-26 20:58:37.407938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.447 [2024-11-26 20:58:37.407949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:42.447 [2024-11-26 20:58:37.407959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:30:42.447 [2024-11-26 20:58:37.407969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.448163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.448211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:42.707 [2024-11-26 20:58:37.448227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.140 ms 00:30:42.707 [2024-11-26 20:58:37.448239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.448337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.448349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:42.707 [2024-11-26 20:58:37.448360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:42.707 [2024-11-26 20:58:37.448370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.506786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.507018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:42.707 [2024-11-26 20:58:37.507049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.341 ms 00:30:42.707 [2024-11-26 20:58:37.507061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.507125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.507137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:42.707 [2024-11-26 20:58:37.507148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:42.707 [2024-11-26 20:58:37.507158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.507699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.507716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:42.707 [2024-11-26 20:58:37.507728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:30:42.707 [2024-11-26 20:58:37.507746] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.507867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.507880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:42.707 [2024-11-26 20:58:37.507891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:30:42.707 [2024-11-26 20:58:37.507901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.527360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.527400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:42.707 [2024-11-26 20:58:37.527415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.437 ms 00:30:42.707 [2024-11-26 20:58:37.527442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.546699] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:42.707 [2024-11-26 20:58:37.546741] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:42.707 [2024-11-26 20:58:37.546758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.546769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:42.707 [2024-11-26 20:58:37.546783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.164 ms 00:30:42.707 [2024-11-26 20:58:37.546793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.577203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.577418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:42.707 [2024-11-26 20:58:37.577442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.355 ms 00:30:42.707 [2024-11-26 20:58:37.577455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.597034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.597088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:42.707 [2024-11-26 20:58:37.597103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.517 ms 00:30:42.707 [2024-11-26 20:58:37.597112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.615163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.615339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:42.707 [2024-11-26 20:58:37.615362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.007 ms 00:30:42.707 [2024-11-26 20:58:37.615373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.707 [2024-11-26 20:58:37.616384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.707 [2024-11-26 20:58:37.616416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:42.707 [2024-11-26 20:58:37.616434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:30:42.707 [2024-11-26 20:58:37.616450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
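The layout dump a little earlier is worth cross-checking when chasing capacity questions, because its numbers are mutually consistent: with the 4 KiB FTL block size implied by the dd runs (--bs=4096) and the dumped L2P parameters (20971520 entries, 4-byte addresses), the region sizes fall out exactly. A quick shell sanity check, using only values that appear in the dump:

    # Assumes the 4 KiB block size used by the dd runs above.
    echo $(( 20971520 * 4 / 1048576 ))        # L2P table: 80 MiB ('Region l2p ... blocks: 80.00 MiB')
    echo $(( 20971520 * 4096 / 1073741824 ))  # addressable logical space: 80 GiB
    echo $(( 261120 * 4096 / 1048576 ))       # one band (261120 blocks): 1020 MiB

At 1020 MiB per band, the 102400 MiB data_btm region holds the roughly 100 bands enumerated in the validity dumps, with the remainder of the 103424 MiB base device going to metadata regions and reserved space.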
00:30:42.967 [2024-11-26 20:58:37.706881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.967 [2024-11-26 20:58:37.706943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:42.967 [2024-11-26 20:58:37.706959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.392 ms 00:30:42.967 [2024-11-26 20:58:37.706986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.967 [2024-11-26 20:58:37.718868] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:42.967 [2024-11-26 20:58:37.722318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.967 [2024-11-26 20:58:37.722355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:42.967 [2024-11-26 20:58:37.722371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.252 ms 00:30:42.967 [2024-11-26 20:58:37.722388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.967 [2024-11-26 20:58:37.722511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.967 [2024-11-26 20:58:37.722526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:42.967 [2024-11-26 20:58:37.722538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:42.967 [2024-11-26 20:58:37.722548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.967 [2024-11-26 20:58:37.722670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.967 [2024-11-26 20:58:37.722685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:42.967 [2024-11-26 20:58:37.722696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:42.967 [2024-11-26 20:58:37.722708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.967 [2024-11-26 20:58:37.722739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.967 [2024-11-26 20:58:37.722750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:42.967 [2024-11-26 20:58:37.722761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:42.967 [2024-11-26 20:58:37.722771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.967 [2024-11-26 20:58:37.722808] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:42.967 [2024-11-26 20:58:37.722820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.967 [2024-11-26 20:58:37.722831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:42.967 [2024-11-26 20:58:37.722841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:42.967 [2024-11-26 20:58:37.722856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.967 [2024-11-26 20:58:37.760643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.967 [2024-11-26 20:58:37.760848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:42.967 [2024-11-26 20:58:37.760874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.764 ms 00:30:42.967 [2024-11-26 20:58:37.760885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.967 [2024-11-26 20:58:37.761046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:42.967 [2024-11-26 
20:58:37.761061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:42.967 [2024-11-26 20:58:37.761073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:42.967 [2024-11-26 20:58:37.761083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:42.967 [2024-11-26 20:58:37.762271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.691 ms, result 0 00:30:43.904  [2024-11-26T20:58:39.834Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-26T20:58:41.211Z] Copying: 58/1024 [MB] (28 MBps) [2024-11-26T20:58:41.777Z] Copying: 85/1024 [MB] (27 MBps) [2024-11-26T20:58:43.154Z] Copying: 114/1024 [MB] (28 MBps) [2024-11-26T20:58:44.090Z] Copying: 143/1024 [MB] (29 MBps) [2024-11-26T20:58:45.025Z] Copying: 171/1024 [MB] (27 MBps) [2024-11-26T20:58:45.960Z] Copying: 200/1024 [MB] (29 MBps) [2024-11-26T20:58:46.896Z] Copying: 230/1024 [MB] (29 MBps) [2024-11-26T20:58:47.831Z] Copying: 259/1024 [MB] (28 MBps) [2024-11-26T20:58:49.207Z] Copying: 288/1024 [MB] (28 MBps) [2024-11-26T20:58:50.143Z] Copying: 317/1024 [MB] (28 MBps) [2024-11-26T20:58:51.083Z] Copying: 345/1024 [MB] (28 MBps) [2024-11-26T20:58:52.019Z] Copying: 374/1024 [MB] (28 MBps) [2024-11-26T20:58:52.957Z] Copying: 402/1024 [MB] (28 MBps) [2024-11-26T20:58:53.894Z] Copying: 431/1024 [MB] (28 MBps) [2024-11-26T20:58:54.830Z] Copying: 459/1024 [MB] (27 MBps) [2024-11-26T20:58:56.207Z] Copying: 487/1024 [MB] (28 MBps) [2024-11-26T20:58:57.142Z] Copying: 516/1024 [MB] (28 MBps) [2024-11-26T20:58:58.110Z] Copying: 544/1024 [MB] (28 MBps) [2024-11-26T20:58:59.044Z] Copying: 573/1024 [MB] (28 MBps) [2024-11-26T20:58:59.987Z] Copying: 602/1024 [MB] (28 MBps) [2024-11-26T20:59:00.923Z] Copying: 632/1024 [MB] (29 MBps) [2024-11-26T20:59:01.859Z] Copying: 662/1024 [MB] (29 MBps) [2024-11-26T20:59:02.794Z] Copying: 691/1024 [MB] (29 MBps) [2024-11-26T20:59:04.171Z] Copying: 720/1024 [MB] (29 MBps) [2024-11-26T20:59:05.111Z] Copying: 748/1024 [MB] (28 MBps) [2024-11-26T20:59:06.050Z] Copying: 777/1024 [MB] (29 MBps) [2024-11-26T20:59:06.994Z] Copying: 807/1024 [MB] (29 MBps) [2024-11-26T20:59:07.939Z] Copying: 836/1024 [MB] (29 MBps) [2024-11-26T20:59:08.879Z] Copying: 865/1024 [MB] (28 MBps) [2024-11-26T20:59:09.816Z] Copying: 892/1024 [MB] (27 MBps) [2024-11-26T20:59:11.197Z] Copying: 917/1024 [MB] (25 MBps) [2024-11-26T20:59:12.135Z] Copying: 943/1024 [MB] (25 MBps) [2024-11-26T20:59:13.073Z] Copying: 970/1024 [MB] (27 MBps) [2024-11-26T20:59:14.011Z] Copying: 996/1024 [MB] (25 MBps) [2024-11-26T20:59:14.950Z] Copying: 1021/1024 [MB] (25 MBps) [2024-11-26T20:59:14.950Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-26 20:59:14.625367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.956 [2024-11-26 20:59:14.625436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:19.956 [2024-11-26 20:59:14.625456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:19.956 [2024-11-26 20:59:14.625470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.957 [2024-11-26 20:59:14.627252] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:19.957 [2024-11-26 20:59:14.632408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.957 [2024-11-26 20:59:14.632450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO 
device 00:31:19.957 [2024-11-26 20:59:14.632463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.121 ms 00:31:19.957 [2024-11-26 20:59:14.632482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.957 [2024-11-26 20:59:14.643043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.957 [2024-11-26 20:59:14.643096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:19.957 [2024-11-26 20:59:14.643110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.243 ms 00:31:19.957 [2024-11-26 20:59:14.643121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.957 [2024-11-26 20:59:14.663527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.957 [2024-11-26 20:59:14.663585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:19.957 [2024-11-26 20:59:14.663602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.388 ms 00:31:19.957 [2024-11-26 20:59:14.663630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.957 [2024-11-26 20:59:14.668538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.957 [2024-11-26 20:59:14.668570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:19.957 [2024-11-26 20:59:14.668582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.861 ms 00:31:19.957 [2024-11-26 20:59:14.668592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.957 [2024-11-26 20:59:14.704546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.957 [2024-11-26 20:59:14.704586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:19.957 [2024-11-26 20:59:14.704600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.890 ms 00:31:19.957 [2024-11-26 20:59:14.704622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.957 [2024-11-26 20:59:14.726107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.957 [2024-11-26 20:59:14.726159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:19.957 [2024-11-26 20:59:14.726174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.444 ms 00:31:19.957 [2024-11-26 20:59:14.726185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.957 [2024-11-26 20:59:14.822691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.957 [2024-11-26 20:59:14.822833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:19.957 [2024-11-26 20:59:14.822927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.463 ms 00:31:19.957 [2024-11-26 20:59:14.822965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.957 [2024-11-26 20:59:14.858981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.957 [2024-11-26 20:59:14.859137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:19.957 [2024-11-26 20:59:14.859217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.971 ms 00:31:19.957 [2024-11-26 20:59:14.859268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.957 [2024-11-26 20:59:14.893739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.957 [2024-11-26 20:59:14.893769] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:19.957 [2024-11-26 20:59:14.893782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.413 ms 00:31:19.957 [2024-11-26 20:59:14.893792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.957 [2024-11-26 20:59:14.927646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.957 [2024-11-26 20:59:14.927681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:19.957 [2024-11-26 20:59:14.927694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.817 ms 00:31:19.957 [2024-11-26 20:59:14.927704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.218 [2024-11-26 20:59:14.962178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.218 [2024-11-26 20:59:14.962214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:20.218 [2024-11-26 20:59:14.962228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.399 ms 00:31:20.218 [2024-11-26 20:59:14.962237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.218 [2024-11-26 20:59:14.962273] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:20.218 [2024-11-26 20:59:14.962288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107264 / 261120 wr_cnt: 1 state: open 00:31:20.218 [2024-11-26 20:59:14.962301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free 00:31:20.219 [2024-11-26 20:59:14.963436] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:20.219 [2024-11-26 20:59:14.963446] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e81524cd-e20b-431d-be2e-32d90f4abaa5 00:31:20.219 [2024-11-26 20:59:14.963471] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107264 00:31:20.219 [2024-11-26 20:59:14.963481] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108224 00:31:20.219 [2024-11-26 20:59:14.963490] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107264 00:31:20.219 [2024-11-26 20:59:14.963501] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:31:20.219 [2024-11-26 20:59:14.963510] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:20.219 [2024-11-26 20:59:14.963520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:20.219 [2024-11-26 20:59:14.963530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:20.219 [2024-11-26 20:59:14.963538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:20.219 [2024-11-26 20:59:14.963548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:20.219 [2024-11-26 20:59:14.963557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.219 [2024-11-26 20:59:14.963567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:20.219 [2024-11-26 20:59:14.963587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:31:20.219 [2024-11-26 20:59:14.963597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.219 [2024-11-26 20:59:14.982058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.219 [2024-11-26 20:59:14.982091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:20.219 [2024-11-26 20:59:14.982103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.397 ms 00:31:20.219 [2024-11-26 20:59:14.982113] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:31:20.219 [2024-11-26 20:59:14.982638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.219 [2024-11-26 20:59:14.982652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:20.219 [2024-11-26 20:59:14.982668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:31:20.219 [2024-11-26 20:59:14.982678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.219 [2024-11-26 20:59:15.030649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.219 [2024-11-26 20:59:15.030684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:20.219 [2024-11-26 20:59:15.030696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.219 [2024-11-26 20:59:15.030706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.219 [2024-11-26 20:59:15.030762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.219 [2024-11-26 20:59:15.030773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:20.219 [2024-11-26 20:59:15.030787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.219 [2024-11-26 20:59:15.030796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.219 [2024-11-26 20:59:15.030858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.219 [2024-11-26 20:59:15.030871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:20.219 [2024-11-26 20:59:15.030882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.219 [2024-11-26 20:59:15.030892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.219 [2024-11-26 20:59:15.030908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.220 [2024-11-26 20:59:15.030918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:20.220 [2024-11-26 20:59:15.030928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.220 [2024-11-26 20:59:15.030938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.220 [2024-11-26 20:59:15.152039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.220 [2024-11-26 20:59:15.152099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:20.220 [2024-11-26 20:59:15.152115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.220 [2024-11-26 20:59:15.152125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.479 [2024-11-26 20:59:15.249868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.479 [2024-11-26 20:59:15.250107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:20.479 [2024-11-26 20:59:15.250129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.479 [2024-11-26 20:59:15.250146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.479 [2024-11-26 20:59:15.250243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.479 [2024-11-26 20:59:15.250256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:20.479 [2024-11-26 20:59:15.250267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:31:20.479 [2024-11-26 20:59:15.250277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.479 [2024-11-26 20:59:15.250313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.479 [2024-11-26 20:59:15.250324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:20.479 [2024-11-26 20:59:15.250334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.479 [2024-11-26 20:59:15.250344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.479 [2024-11-26 20:59:15.250452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.479 [2024-11-26 20:59:15.250466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:20.479 [2024-11-26 20:59:15.250476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.479 [2024-11-26 20:59:15.250497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.479 [2024-11-26 20:59:15.250532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.479 [2024-11-26 20:59:15.250544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:20.479 [2024-11-26 20:59:15.250554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.479 [2024-11-26 20:59:15.250564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.479 [2024-11-26 20:59:15.250605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.479 [2024-11-26 20:59:15.250638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:20.479 [2024-11-26 20:59:15.250665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.479 [2024-11-26 20:59:15.250675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.479 [2024-11-26 20:59:15.250720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:20.479 [2024-11-26 20:59:15.250733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:20.480 [2024-11-26 20:59:15.250743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:20.480 [2024-11-26 20:59:15.250754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.480 [2024-11-26 20:59:15.250881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 625.702 ms, result 0 00:31:22.408 00:31:22.408 00:31:22.408 20:59:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:24.317 20:59:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:24.317 [2024-11-26 20:59:19.048831] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:31:24.317 [2024-11-26 20:59:19.049032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82939 ] 00:31:24.317 [2024-11-26 20:59:19.246404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.576 [2024-11-26 20:59:19.393949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.835 [2024-11-26 20:59:19.740105] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:24.835 [2024-11-26 20:59:19.740172] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:25.094 [2024-11-26 20:59:19.900971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.094 [2024-11-26 20:59:19.901024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:25.094 [2024-11-26 20:59:19.901039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:25.094 [2024-11-26 20:59:19.901049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.094 [2024-11-26 20:59:19.901095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.094 [2024-11-26 20:59:19.901110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:25.094 [2024-11-26 20:59:19.901120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:25.094 [2024-11-26 20:59:19.901129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.094 [2024-11-26 20:59:19.901149] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:25.094 [2024-11-26 20:59:19.902183] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:25.094 [2024-11-26 20:59:19.902213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.094 [2024-11-26 20:59:19.902224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:25.094 [2024-11-26 20:59:19.902234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:31:25.094 [2024-11-26 20:59:19.902244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.094 [2024-11-26 20:59:19.903695] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:25.094 [2024-11-26 20:59:19.922987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.094 [2024-11-26 20:59:19.923025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:25.094 [2024-11-26 20:59:19.923039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.293 ms 00:31:25.094 [2024-11-26 20:59:19.923051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.094 [2024-11-26 20:59:19.923119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.094 [2024-11-26 20:59:19.923132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:25.094 [2024-11-26 20:59:19.923144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:25.094 [2024-11-26 20:59:19.923154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.094 [2024-11-26 20:59:19.930034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:25.094 [2024-11-26 20:59:19.930175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:25.094 [2024-11-26 20:59:19.930298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.807 ms 00:31:25.094 [2024-11-26 20:59:19.930408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.094 [2024-11-26 20:59:19.930528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.094 [2024-11-26 20:59:19.930575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:25.094 [2024-11-26 20:59:19.930711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:31:25.094 [2024-11-26 20:59:19.930749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.094 [2024-11-26 20:59:19.930824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.094 [2024-11-26 20:59:19.930862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:25.094 [2024-11-26 20:59:19.930896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:25.094 [2024-11-26 20:59:19.930985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.094 [2024-11-26 20:59:19.931113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:25.094 [2024-11-26 20:59:19.936145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.094 [2024-11-26 20:59:19.936289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:25.094 [2024-11-26 20:59:19.936422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms 00:31:25.094 [2024-11-26 20:59:19.936462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.095 [2024-11-26 20:59:19.936636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.095 [2024-11-26 20:59:19.936684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:25.095 [2024-11-26 20:59:19.936720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:25.095 [2024-11-26 20:59:19.936752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.095 [2024-11-26 20:59:19.936863] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:25.095 [2024-11-26 20:59:19.936918] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:25.095 [2024-11-26 20:59:19.937058] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:25.095 [2024-11-26 20:59:19.937129] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:25.095 [2024-11-26 20:59:19.937319] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:25.095 [2024-11-26 20:59:19.937480] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:25.095 [2024-11-26 20:59:19.937498] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:25.095 [2024-11-26 20:59:19.937512] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:25.095 [2024-11-26 20:59:19.937525] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:25.095 [2024-11-26 20:59:19.937538] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:25.095 [2024-11-26 20:59:19.937549] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:25.095 [2024-11-26 20:59:19.937565] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:25.095 [2024-11-26 20:59:19.937575] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:25.095 [2024-11-26 20:59:19.937587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.095 [2024-11-26 20:59:19.937598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:25.095 [2024-11-26 20:59:19.937610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:31:25.095 [2024-11-26 20:59:19.937632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.095 [2024-11-26 20:59:19.937717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.095 [2024-11-26 20:59:19.937730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:25.095 [2024-11-26 20:59:19.937741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:25.095 [2024-11-26 20:59:19.937752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.095 [2024-11-26 20:59:19.937852] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:25.095 [2024-11-26 20:59:19.937868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:25.095 [2024-11-26 20:59:19.937879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:25.095 [2024-11-26 20:59:19.937890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.095 [2024-11-26 20:59:19.937901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:25.095 [2024-11-26 20:59:19.937912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:25.095 [2024-11-26 20:59:19.937921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:25.095 [2024-11-26 20:59:19.937931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:25.095 [2024-11-26 20:59:19.937941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:25.095 [2024-11-26 20:59:19.937951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:25.095 [2024-11-26 20:59:19.937961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:25.095 [2024-11-26 20:59:19.937971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:25.095 [2024-11-26 20:59:19.937980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:25.095 [2024-11-26 20:59:19.937999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:25.095 [2024-11-26 20:59:19.938010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:25.095 [2024-11-26 20:59:19.938019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.095 [2024-11-26 20:59:19.938028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:25.095 [2024-11-26 20:59:19.938038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:25.095 [2024-11-26 20:59:19.938048] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.095 [2024-11-26 20:59:19.938058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:25.095 [2024-11-26 20:59:19.938068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:25.095 [2024-11-26 20:59:19.938078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.095 [2024-11-26 20:59:19.938087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:25.095 [2024-11-26 20:59:19.938097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:25.095 [2024-11-26 20:59:19.938107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.095 [2024-11-26 20:59:19.938116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:25.095 [2024-11-26 20:59:19.938125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:25.095 [2024-11-26 20:59:19.938136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.095 [2024-11-26 20:59:19.938146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:25.095 [2024-11-26 20:59:19.938155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:25.095 [2024-11-26 20:59:19.938169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:25.095 [2024-11-26 20:59:19.938179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:25.095 [2024-11-26 20:59:19.938188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:25.095 [2024-11-26 20:59:19.938198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:25.095 [2024-11-26 20:59:19.938207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:25.095 [2024-11-26 20:59:19.938217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:25.095 [2024-11-26 20:59:19.938226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:25.095 [2024-11-26 20:59:19.938235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:25.095 [2024-11-26 20:59:19.938244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:25.095 [2024-11-26 20:59:19.938254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.095 [2024-11-26 20:59:19.938263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:25.095 [2024-11-26 20:59:19.938272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:25.095 [2024-11-26 20:59:19.938281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.095 [2024-11-26 20:59:19.938291] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:25.095 [2024-11-26 20:59:19.938301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:25.095 [2024-11-26 20:59:19.938310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:25.095 [2024-11-26 20:59:19.938321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:25.095 [2024-11-26 20:59:19.938331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:25.095 [2024-11-26 20:59:19.938340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:25.095 [2024-11-26 20:59:19.938350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:25.095 
[2024-11-26 20:59:19.938359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:25.095 [2024-11-26 20:59:19.938368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:25.095 [2024-11-26 20:59:19.938378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:25.095 [2024-11-26 20:59:19.938388] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:25.095 [2024-11-26 20:59:19.938400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:25.095 [2024-11-26 20:59:19.938417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:25.095 [2024-11-26 20:59:19.938428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:25.095 [2024-11-26 20:59:19.938438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:25.095 [2024-11-26 20:59:19.938449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:25.095 [2024-11-26 20:59:19.938460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:25.095 [2024-11-26 20:59:19.938470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:25.095 [2024-11-26 20:59:19.938481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:25.095 [2024-11-26 20:59:19.938494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:25.095 [2024-11-26 20:59:19.938504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:25.095 [2024-11-26 20:59:19.938515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:25.095 [2024-11-26 20:59:19.938525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:25.095 [2024-11-26 20:59:19.938535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:25.095 [2024-11-26 20:59:19.938545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:25.095 [2024-11-26 20:59:19.938556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:25.095 [2024-11-26 20:59:19.938577] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:25.095 [2024-11-26 20:59:19.938588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:25.095 [2024-11-26 20:59:19.938599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:25.096 [2024-11-26 20:59:19.938610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:25.096 [2024-11-26 20:59:19.938632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:25.096 [2024-11-26 20:59:19.938670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:25.096 [2024-11-26 20:59:19.938681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.096 [2024-11-26 20:59:19.938692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:25.096 [2024-11-26 20:59:19.938702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:31:25.096 [2024-11-26 20:59:19.938712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.096 [2024-11-26 20:59:19.977884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.096 [2024-11-26 20:59:19.977920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:25.096 [2024-11-26 20:59:19.977933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.125 ms 00:31:25.096 [2024-11-26 20:59:19.977948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.096 [2024-11-26 20:59:19.978028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.096 [2024-11-26 20:59:19.978039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:25.096 [2024-11-26 20:59:19.978049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:25.096 [2024-11-26 20:59:19.978058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.096 [2024-11-26 20:59:20.037452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.096 [2024-11-26 20:59:20.037645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:25.096 [2024-11-26 20:59:20.037671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.330 ms 00:31:25.096 [2024-11-26 20:59:20.037683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.096 [2024-11-26 20:59:20.037736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.096 [2024-11-26 20:59:20.037748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:25.096 [2024-11-26 20:59:20.037766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:25.096 [2024-11-26 20:59:20.037777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.096 [2024-11-26 20:59:20.038273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.096 [2024-11-26 20:59:20.038294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:25.096 [2024-11-26 20:59:20.038306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:31:25.096 [2024-11-26 20:59:20.038317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.096 [2024-11-26 20:59:20.038434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.096 [2024-11-26 20:59:20.038448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:25.096 [2024-11-26 20:59:20.038466] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:31:25.096 [2024-11-26 20:59:20.038477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.096 [2024-11-26 20:59:20.058642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.096 [2024-11-26 20:59:20.058794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:25.096 [2024-11-26 20:59:20.058967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.142 ms 00:31:25.096 [2024-11-26 20:59:20.059008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.096 [2024-11-26 20:59:20.078235] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:25.096 [2024-11-26 20:59:20.078403] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:25.096 [2024-11-26 20:59:20.078534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.096 [2024-11-26 20:59:20.078570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:25.096 [2024-11-26 20:59:20.078603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.370 ms 00:31:25.096 [2024-11-26 20:59:20.078668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.355 [2024-11-26 20:59:20.108419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.355 [2024-11-26 20:59:20.108569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:25.355 [2024-11-26 20:59:20.108739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.685 ms 00:31:25.355 [2024-11-26 20:59:20.108780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.355 [2024-11-26 20:59:20.127197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.355 [2024-11-26 20:59:20.127340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:25.355 [2024-11-26 20:59:20.127449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.351 ms 00:31:25.355 [2024-11-26 20:59:20.127486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.355 [2024-11-26 20:59:20.145423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.355 [2024-11-26 20:59:20.145568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:25.355 [2024-11-26 20:59:20.145689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.878 ms 00:31:25.355 [2024-11-26 20:59:20.145728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.355 [2024-11-26 20:59:20.146517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.355 [2024-11-26 20:59:20.146661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:25.355 [2024-11-26 20:59:20.146744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:31:25.355 [2024-11-26 20:59:20.146779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.355 [2024-11-26 20:59:20.232443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.355 [2024-11-26 20:59:20.232660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:25.355 [2024-11-26 20:59:20.232790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.615 ms 00:31:25.355 [2024-11-26 20:59:20.232828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.355 [2024-11-26 20:59:20.243834] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:25.355 [2024-11-26 20:59:20.246887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.355 [2024-11-26 20:59:20.247026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:25.355 [2024-11-26 20:59:20.247163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.968 ms 00:31:25.355 [2024-11-26 20:59:20.247202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.355 [2024-11-26 20:59:20.247326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.355 [2024-11-26 20:59:20.247366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:25.355 [2024-11-26 20:59:20.247461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:25.355 [2024-11-26 20:59:20.247498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.355 [2024-11-26 20:59:20.249099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.355 [2024-11-26 20:59:20.249231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:25.355 [2024-11-26 20:59:20.249301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:31:25.355 [2024-11-26 20:59:20.249337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.356 [2024-11-26 20:59:20.249395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.356 [2024-11-26 20:59:20.249521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:25.356 [2024-11-26 20:59:20.249559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:25.356 [2024-11-26 20:59:20.249597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.356 [2024-11-26 20:59:20.249716] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:25.356 [2024-11-26 20:59:20.249758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.356 [2024-11-26 20:59:20.249790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:25.356 [2024-11-26 20:59:20.249859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:25.356 [2024-11-26 20:59:20.249894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.356 [2024-11-26 20:59:20.287372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.356 [2024-11-26 20:59:20.287407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:25.356 [2024-11-26 20:59:20.287427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.427 ms 00:31:25.356 [2024-11-26 20:59:20.287437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.356 [2024-11-26 20:59:20.287508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.356 [2024-11-26 20:59:20.287521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:25.356 [2024-11-26 20:59:20.287531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:25.356 [2024-11-26 20:59:20.287542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:25.356 [2024-11-26 20:59:20.288674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 387.238 ms, result 0 00:31:26.734  [2024-11-26T20:59:22.665Z] Copying: 1164/1048576 [kB] (1164 kBps) ... [2024-11-26T20:59:49.617Z] Copying: 1024/1024 [MB] (average 35 MBps)[2024-11-26 20:59:49.426311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.623 [2024-11-26 20:59:49.426624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:54.623 [2024-11-26 20:59:49.426652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:54.623 [2024-11-26 20:59:49.426664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.623 [2024-11-26 20:59:49.426702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:54.623 [2024-11-26 20:59:49.431248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.623 [2024-11-26 20:59:49.431285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:54.623 [2024-11-26 20:59:49.431298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.525 ms 00:31:54.623 [2024-11-26 20:59:49.431309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.623 [2024-11-26 20:59:49.431523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.623 [2024-11-26 20:59:49.431537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:54.623 [2024-11-26 20:59:49.431548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:31:54.623 [2024-11-26 20:59:49.431559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.623 [2024-11-26 20:59:49.441470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.623
[2024-11-26 20:59:49.441519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:54.623 [2024-11-26 20:59:49.441536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.883 ms 00:31:54.623 [2024-11-26 20:59:49.441550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.623 [2024-11-26 20:59:49.447292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.623 [2024-11-26 20:59:49.447335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:54.623 [2024-11-26 20:59:49.447348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.704 ms 00:31:54.623 [2024-11-26 20:59:49.447359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.623 [2024-11-26 20:59:49.483797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.623 [2024-11-26 20:59:49.483961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:54.623 [2024-11-26 20:59:49.483998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.372 ms 00:31:54.623 [2024-11-26 20:59:49.484009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.623 [2024-11-26 20:59:49.503662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.623 [2024-11-26 20:59:49.503699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:54.623 [2024-11-26 20:59:49.503713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.614 ms 00:31:54.623 [2024-11-26 20:59:49.503724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.623 [2024-11-26 20:59:49.505577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.623 [2024-11-26 20:59:49.505630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:54.623 [2024-11-26 20:59:49.505651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.806 ms 00:31:54.623 [2024-11-26 20:59:49.505662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.623 [2024-11-26 20:59:49.540359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.623 [2024-11-26 20:59:49.540393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:54.623 [2024-11-26 20:59:49.540405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.678 ms 00:31:54.623 [2024-11-26 20:59:49.540415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.623 [2024-11-26 20:59:49.574753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.623 [2024-11-26 20:59:49.574786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:54.623 [2024-11-26 20:59:49.574798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.299 ms 00:31:54.623 [2024-11-26 20:59:49.574808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.623 [2024-11-26 20:59:49.610413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.623 [2024-11-26 20:59:49.610458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:54.623 [2024-11-26 20:59:49.610471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.567 ms 00:31:54.623 [2024-11-26 20:59:49.610480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.882 [2024-11-26 20:59:49.645824] 
00:31:54.882 [2024-11-26 20:59:49.645824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:54.882 [2024-11-26 20:59:49.645857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:31:54.882 [2024-11-26 20:59:49.645870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.266 ms
00:31:54.882 [2024-11-26 20:59:49.645879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:54.882 [2024-11-26 20:59:49.645915] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:31:54.882 [2024-11-26 20:59:49.645931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:31:54.882 [2024-11-26 20:59:49.645943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:31:54.882 [2024-11-26 20:59:49.645953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3-100: 0 / 261120 wr_cnt: 0 state: free
00:31:54.884 [2024-11-26 20:59:49.646976] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:31:54.884 [2024-11-26 20:59:49.646986] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e81524cd-e20b-431d-be2e-32d90f4abaa5
00:31:54.884 [2024-11-26 20:59:49.646997] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:31:54.884 [2024-11-26 20:59:49.647017] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 157376
00:31:54.884 [2024-11-26 20:59:49.647031] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 155392
00:31:54.884 [2024-11-26 20:59:49.647041] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0128
00:31:54.884 [2024-11-26 20:59:49.647051] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:31:54.884 [2024-11-26 20:59:49.647072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:31:54.884 [2024-11-26 20:59:49.647082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:31:54.884 [2024-11-26 20:59:49.647091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:31:54.884 [2024-11-26 20:59:49.647100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:31:54.884 [2024-11-26 20:59:49.647110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:54.884 [2024-11-26 20:59:49.647121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:31:54.884 [2024-11-26 20:59:49.647131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms
00:31:54.884 [2024-11-26 20:59:49.647141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
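Two quick consistency checks on the statistics dump above, using only numbers printed there: the total valid LBA count is the sum of the per-band valid counts, and WAF (write amplification factor) is total writes divided by user writes. In shell:

    # total valid LBAs = Band 1 + Band 2 valid counts
    echo $(( 261120 + 1536 ))                              # -> 262656
    # WAF = total writes / user writes
    awk 'BEGIN { printf "WAF: %.4f\n", 157376 / 155392 }'  # -> WAF: 1.0128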
00:31:54.884 [2024-11-26 20:59:49.666808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:54.884 [2024-11-26 20:59:49.666839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:31:54.884 [2024-11-26 20:59:49.666852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.631 ms
00:31:54.884 [2024-11-26 20:59:49.666862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:54.884 [2024-11-26 20:59:49.667418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:54.884 [2024-11-26 20:59:49.667439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:31:54.884 [2024-11-26 20:59:49.667450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms
00:31:54.884 [2024-11-26 20:59:49.667467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:54.884 [2024-11-26 20:59:49.717270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:54.884 [2024-11-26 20:59:49.717305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:54.884 [2024-11-26 20:59:49.717317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:54.884 [2024-11-26 20:59:49.717344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:54.884 [2024-11-26 20:59:49.717395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:54.884 [2024-11-26 20:59:49.717407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:54.884 [2024-11-26 20:59:49.717416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:54.884 [2024-11-26 20:59:49.717432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:54.884 [2024-11-26 20:59:49.717496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:54.884 [2024-11-26 20:59:49.717509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:54.884 [2024-11-26 20:59:49.717519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:54.884 [2024-11-26 20:59:49.717529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:54.884 [2024-11-26 20:59:49.717546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:54.884 [2024-11-26 20:59:49.717557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:54.884 [2024-11-26 20:59:49.717567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:54.884 [2024-11-26 20:59:49.717577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:54.884 [2024-11-26 20:59:49.839736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:54.884 [2024-11-26 20:59:49.839799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:54.884 [2024-11-26 20:59:49.839815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:54.884 [2024-11-26 20:59:49.839826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:55.143 [2024-11-26 20:59:49.942352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.143 [2024-11-26 20:59:49.942404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:55.143 [2024-11-26 20:59:49.942421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:55.143 [2024-11-26 20:59:49.942438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:55.143 [2024-11-26 20:59:49.942539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.143 [2024-11-26 20:59:49.942551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:31:55.143 [2024-11-26 20:59:49.942562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:55.143 [2024-11-26 20:59:49.942572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:55.143 [2024-11-26 20:59:49.942644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.143 [2024-11-26 20:59:49.942657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:31:55.143 [2024-11-26 20:59:49.942668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:55.143 [2024-11-26 20:59:49.942678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:55.143 [2024-11-26 20:59:49.942809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.143 [2024-11-26 20:59:49.942828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:31:55.143 [2024-11-26 20:59:49.942838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:55.143 [2024-11-26 20:59:49.942849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:55.143 [2024-11-26 20:59:49.942885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.143 [2024-11-26 20:59:49.942897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:31:55.143 [2024-11-26 20:59:49.942908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:55.143 [2024-11-26 20:59:49.942918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:55.143 [2024-11-26 20:59:49.942957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.143 [2024-11-26 20:59:49.942977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:55.143 [2024-11-26 20:59:49.942988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:55.143 [2024-11-26 20:59:49.942998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:55.143 [2024-11-26 20:59:49.943043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:55.143 [2024-11-26 20:59:49.943055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:55.143 [2024-11-26 20:59:49.943065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:55.143 [2024-11-26 20:59:49.943075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:55.143 [2024-11-26 20:59:49.943226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 516.849 ms, result 0
00:31:56.080 
00:31:56.080 
00:31:56.080 20:59:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:31:57.980 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:31:57.981 20:59:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
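For scale on the spdk_dd invocation above: with the 4 KiB FTL block size implied by the layout dump further down, --count=262144 --skip=262144 selects the second gigabyte of ftl0, which is why the copy progress below runs to exactly 1024 MB. The arithmetic, as a sketch:

    # 262144 blocks * 4096 B per block = 1 GiB
    echo $(( 262144 * 4096 / 1024 / 1024 ))   # -> 1024 (MiB)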
00:31:57.981 [2024-11-26 20:59:52.806441] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
00:31:57.981 [2024-11-26 20:59:52.806566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83277 ]
00:31:58.238 [2024-11-26 20:59:52.989872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:58.239 [2024-11-26 20:59:53.159225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:58.807 [2024-11-26 20:59:53.513122] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:31:58.807 [2024-11-26 20:59:53.513185] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:31:58.807 [2024-11-26 20:59:53.674842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.674895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:31:58.807 [2024-11-26 20:59:53.674911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:31:58.807 [2024-11-26 20:59:53.674923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.674973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.674989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:58.807 [2024-11-26 20:59:53.675001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:31:58.807 [2024-11-26 20:59:53.675011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.675033] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:31:58.807 [2024-11-26 20:59:53.676155] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:31:58.807 [2024-11-26 20:59:53.676192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.676204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:58.807 [2024-11-26 20:59:53.676216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms
00:31:58.807 [2024-11-26 20:59:53.676226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.677701] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:31:58.807 [2024-11-26 20:59:53.696731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.696770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:31:58.807 [2024-11-26 20:59:53.696785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.031 ms
00:31:58.807 [2024-11-26 20:59:53.696795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.696865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.696878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:31:58.807 [2024-11-26 20:59:53.696889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:31:58.807 [2024-11-26 20:59:53.696899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.703814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.703842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:31:58.807 [2024-11-26 20:59:53.703854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms
00:31:58.807 [2024-11-26 20:59:53.703868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.703949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.703962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:31:58.807 [2024-11-26 20:59:53.703973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:31:58.807 [2024-11-26 20:59:53.703983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.704024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.704037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:31:58.807 [2024-11-26 20:59:53.704047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:31:58.807 [2024-11-26 20:59:53.704057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.704088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:31:58.807 [2024-11-26 20:59:53.708839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.708871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:31:58.807 [2024-11-26 20:59:53.708886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.758 ms
00:31:58.807 [2024-11-26 20:59:53.708897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.708926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.708937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:31:58.807 [2024-11-26 20:59:53.708947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:31:58.807 [2024-11-26 20:59:53.708957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.709006] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:31:58.807 [2024-11-26 20:59:53.709029] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:31:58.807 [2024-11-26 20:59:53.709063] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:31:58.807 [2024-11-26 20:59:53.709084] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:31:58.807 [2024-11-26 20:59:53.709171] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:31:58.807 [2024-11-26 20:59:53.709183] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:31:58.807 [2024-11-26 20:59:53.709196] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:31:58.807 [2024-11-26 20:59:53.709208] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:31:58.807 [2024-11-26 20:59:53.709220] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
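The two capacity lines above put the NV cache at roughly five percent of the base device; an observation about this particular run, not a documented sizing rule:

    awk 'BEGIN { printf "%.1f%%\n", 100 * 5171 / 103424 }'   # -> 5.0%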
00:31:58.807 [2024-11-26 20:59:53.709231] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:31:58.807 [2024-11-26 20:59:53.709241] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:31:58.807 [2024-11-26 20:59:53.709253] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:31:58.807 [2024-11-26 20:59:53.709263] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:31:58.807 [2024-11-26 20:59:53.709273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.709282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:31:58.807 [2024-11-26 20:59:53.709292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms
00:31:58.807 [2024-11-26 20:59:53.709301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.709368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.807 [2024-11-26 20:59:53.709379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:31:58.807 [2024-11-26 20:59:53.709389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms
00:31:58.807 [2024-11-26 20:59:53.709398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.807 [2024-11-26 20:59:53.709488] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:31:58.807 [2024-11-26 20:59:53.709502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:31:58.807 [2024-11-26 20:59:53.709512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:31:58.807 [2024-11-26 20:59:53.709522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:58.807 [2024-11-26 20:59:53.709532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:31:58.807 [2024-11-26 20:59:53.709540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:31:58.807 [2024-11-26 20:59:53.709549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:31:58.807 [2024-11-26 20:59:53.709559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:31:58.807 [2024-11-26 20:59:53.709569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:31:58.807 [2024-11-26 20:59:53.709577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:31:58.807 [2024-11-26 20:59:53.709586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:31:58.807 [2024-11-26 20:59:53.709596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:31:58.807 [2024-11-26 20:59:53.709605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:31:58.807 [2024-11-26 20:59:53.709643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:31:58.807 [2024-11-26 20:59:53.709669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:31:58.807 [2024-11-26 20:59:53.709679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:58.807 [2024-11-26 20:59:53.709688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:31:58.807 [2024-11-26 20:59:53.709697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:31:58.807 [2024-11-26 20:59:53.709706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:58.807 [2024-11-26 20:59:53.709715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:31:58.807 [2024-11-26 20:59:53.709724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:31:58.807 [2024-11-26 20:59:53.709734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:58.807 [2024-11-26 20:59:53.709743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:31:58.807 [2024-11-26 20:59:53.709752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:31:58.807 [2024-11-26 20:59:53.709761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:58.808 [2024-11-26 20:59:53.709770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:31:58.808 [2024-11-26 20:59:53.709779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:31:58.808 [2024-11-26 20:59:53.709788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:58.808 [2024-11-26 20:59:53.709797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:31:58.808 [2024-11-26 20:59:53.709806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:31:58.808 [2024-11-26 20:59:53.709815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:31:58.808 [2024-11-26 20:59:53.709824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:31:58.808 [2024-11-26 20:59:53.709833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:31:58.808 [2024-11-26 20:59:53.709842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:31:58.808 [2024-11-26 20:59:53.709851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:31:58.808 [2024-11-26 20:59:53.709860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:31:58.808 [2024-11-26 20:59:53.709869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:31:58.808 [2024-11-26 20:59:53.709880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:31:58.808 [2024-11-26 20:59:53.709889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:31:58.808 [2024-11-26 20:59:53.709897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:58.808 [2024-11-26 20:59:53.709906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:31:58.808 [2024-11-26 20:59:53.709915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:31:58.808 [2024-11-26 20:59:53.709924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:58.808 [2024-11-26 20:59:53.709935] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:31:58.808 [2024-11-26 20:59:53.709945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:31:58.808 [2024-11-26 20:59:53.709955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:31:58.808 [2024-11-26 20:59:53.709965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:31:58.808 [2024-11-26 20:59:53.709975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:31:58.808 [2024-11-26 20:59:53.709984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:31:58.808 [2024-11-26 20:59:53.709993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:31:58.808 [2024-11-26 20:59:53.710003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:31:58.808 [2024-11-26 20:59:53.710012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:31:58.808 [2024-11-26 20:59:53.710021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:31:58.808 [2024-11-26 20:59:53.710032] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:31:58.808 [2024-11-26 20:59:53.710044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:58.808 [2024-11-26 20:59:53.710059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:31:58.808 [2024-11-26 20:59:53.710086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:31:58.808 [2024-11-26 20:59:53.710097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:31:58.808 [2024-11-26 20:59:53.710107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:31:58.808 [2024-11-26 20:59:53.710118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:31:58.808 [2024-11-26 20:59:53.710128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:31:58.808 [2024-11-26 20:59:53.710138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:31:58.808 [2024-11-26 20:59:53.710149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:31:58.808 [2024-11-26 20:59:53.710159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:31:58.808 [2024-11-26 20:59:53.710170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:31:58.808 [2024-11-26 20:59:53.710180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:31:58.808 [2024-11-26 20:59:53.710190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:31:58.808 [2024-11-26 20:59:53.710201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:31:58.808 [2024-11-26 20:59:53.710212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:31:58.808 [2024-11-26 20:59:53.710222] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:31:58.808 [2024-11-26 20:59:53.710233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:58.808 [2024-11-26 20:59:53.710244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:31:58.808 [2024-11-26 20:59:53.710254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:31:58.808 [2024-11-26 20:59:53.710265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:31:58.808 [2024-11-26 20:59:53.710275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:31:58.808 [2024-11-26 20:59:53.710287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.808 [2024-11-26 20:59:53.710299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:31:58.808 [2024-11-26 20:59:53.710309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms
00:31:58.808 [2024-11-26 20:59:53.710320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.808 [2024-11-26 20:59:53.749780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.808 [2024-11-26 20:59:53.749952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:58.808 [2024-11-26 20:59:53.749992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.409 ms
00:31:58.808 [2024-11-26 20:59:53.750010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:58.808 [2024-11-26 20:59:53.750096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:58.808 [2024-11-26 20:59:53.750108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:31:58.808 [2024-11-26 20:59:53.750119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:31:58.808 [2024-11-26 20:59:53.750129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.067 [2024-11-26 20:59:53.805376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.067 [2024-11-26 20:59:53.805411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:59.067 [2024-11-26 20:59:53.805425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.175 ms
00:31:59.067 [2024-11-26 20:59:53.805435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.067 [2024-11-26 20:59:53.805474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.067 [2024-11-26 20:59:53.805485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:59.067 [2024-11-26 20:59:53.805500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:31:59.067 [2024-11-26 20:59:53.805509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:53.806031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:53.806046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:59.068 [2024-11-26 20:59:53.806074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms
00:31:59.068 [2024-11-26 20:59:53.806084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:53.806207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:53.806221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:59.068 [2024-11-26 20:59:53.806238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms
00:31:59.068 [2024-11-26 20:59:53.806248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
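The superblock dump above records each region as hex block offsets and sizes; at 4 KiB per block these map back onto the MiB figures in the layout dump. For example, the type:0x2 entry (blk_offs:0x20, blk_sz:0x5000) matches the l2p region at offset 0.12 MiB with 80.00 MiB of blocks, and that 80 MiB in turn equals the 20971520 L2P entries at the 4-byte address size reported above. Checked in shell:

    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # region type:0x2 -> 80 (MiB)
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # L2P entries * 4 B -> 80 (MiB)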
00:31:59.068 [2024-11-26 20:59:53.825111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:53.825151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:59.068 [2024-11-26 20:59:53.825165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.841 ms
00:31:59.068 [2024-11-26 20:59:53.825175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:53.844483] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:31:59.068 [2024-11-26 20:59:53.844519] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:31:59.068 [2024-11-26 20:59:53.844534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:53.844544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:31:59.068 [2024-11-26 20:59:53.844571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.248 ms
00:31:59.068 [2024-11-26 20:59:53.844581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:53.873866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:53.873918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:31:59.068 [2024-11-26 20:59:53.873932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.231 ms
00:31:59.068 [2024-11-26 20:59:53.873942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:53.891857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:53.891892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:31:59.068 [2024-11-26 20:59:53.891905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.855 ms
00:31:59.068 [2024-11-26 20:59:53.891915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:53.909747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:53.909780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:31:59.068 [2024-11-26 20:59:53.909792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.792 ms
00:31:59.068 [2024-11-26 20:59:53.909801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:53.910513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:53.910536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:31:59.068 [2024-11-26 20:59:53.910551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms
00:31:59.068 [2024-11-26 20:59:53.910560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:53.994519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:53.994580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:31:59.068 [2024-11-26 20:59:53.994604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.936 ms
00:31:59.068 [2024-11-26 20:59:53.994627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:54.005263] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:31:59.068 [2024-11-26 20:59:54.008100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:54.008131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:31:59.068 [2024-11-26 20:59:54.008144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.400 ms
00:31:59.068 [2024-11-26 20:59:54.008155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:54.008247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:54.008260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:31:59.068 [2024-11-26 20:59:54.008276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:31:59.068 [2024-11-26 20:59:54.008285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:54.009171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:54.009191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:31:59.068 [2024-11-26 20:59:54.009202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms
00:31:59.068 [2024-11-26 20:59:54.009212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:54.009238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:54.009249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:31:59.068 [2024-11-26 20:59:54.009259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:31:59.068 [2024-11-26 20:59:54.009269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:54.009309] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:31:59.068 [2024-11-26 20:59:54.009321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:54.009331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:31:59.068 [2024-11-26 20:59:54.009342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:31:59.068 [2024-11-26 20:59:54.009352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:54.044820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:54.044856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:31:59.068 [2024-11-26 20:59:54.044876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.431 ms
00:31:59.068 [2024-11-26 20:59:54.044885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:54.044955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:59.068 [2024-11-26 20:59:54.044968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:31:59.068 [2024-11-26 20:59:54.044978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:31:59.068 [2024-11-26 20:59:54.044988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:59.068 [2024-11-26 20:59:54.046119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 370.776 ms, result 0
00:32:00.443 [2024-11-26T20:59:56.371Z] Copying: 29/1024 [MB] (29 MBps)
00:32:34.806 [2024-11-26T21:00:29.799Z] Copying: 1024/1024 [MB] (average 29 MBps)
00:32:34.806 [2024-11-26 21:00:29.719579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:34.806 [2024-11-26 21:00:29.719701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:32:34.806 [2024-11-26 21:00:29.719727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:32:34.806 [2024-11-26 21:00:29.719745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.806 [2024-11-26 21:00:29.719785] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:34.806 [2024-11-26 21:00:29.726859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:34.806 [2024-11-26 21:00:29.726926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:32:34.806 [2024-11-26 21:00:29.726946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.046 ms
00:32:34.806 [2024-11-26 21:00:29.726964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.806 [2024-11-26 21:00:29.727450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:34.806 [2024-11-26 21:00:29.727472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:32:34.806 [2024-11-26 21:00:29.727491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms
00:32:34.806 [2024-11-26 21:00:29.727508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.806 [2024-11-26 21:00:29.732213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:34.806 [2024-11-26 21:00:29.732279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:32:34.806 [2024-11-26 21:00:29.732313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.680 ms
00:32:34.806 [2024-11-26 21:00:29.732358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:34.806 [2024-11-26 21:00:29.742280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:34.806 [2024-11-26 21:00:29.742537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:32:34.806 [2024-11-26 21:00:29.742570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.862 ms
00:32:34.806 [2024-11-26 21:00:29.742589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:35.065 [2024-11-26 21:00:29.805512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:35.065 [2024-11-26 21:00:29.805767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:32:35.065 [2024-11-26 21:00:29.805802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.769 ms
00:32:35.065 [2024-11-26 21:00:29.805819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:35.065 [2024-11-26 21:00:29.838049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:35.065 [2024-11-26 21:00:29.838094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:32:35.065 [2024-11-26 21:00:29.838109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.168 ms
00:32:35.065 [2024-11-26 21:00:29.838120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:35.065 [2024-11-26 21:00:29.840488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:35.065 [2024-11-26 21:00:29.840534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:32:35.065 [2024-11-26 21:00:29.840548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.309 ms
00:32:35.065 [2024-11-26 21:00:29.840559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:35.065 [2024-11-26 21:00:29.877954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:35.065 [2024-11-26 21:00:29.877992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:32:35.065 [2024-11-26 21:00:29.878005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.375 ms
00:32:35.065 [2024-11-26 21:00:29.878031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:35.065 [2024-11-26 21:00:29.913944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:35.065 [2024-11-26 21:00:29.913981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:32:35.065 [2024-11-26 21:00:29.913994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.871 ms
00:32:35.065 [2024-11-26 21:00:29.914019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:35.065 [2024-11-26 21:00:29.948496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:35.066 [2024-11-26 21:00:29.948659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:32:35.066 [2024-11-26 21:00:29.948695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.439 ms
00:32:35.066 [2024-11-26 21:00:29.948705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:35.066 [2024-11-26 21:00:29.982749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:35.066 [2024-11-26 21:00:29.982915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:32:35.066 [2024-11-26 21:00:29.982935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.923 ms
00:32:35.066 [2024-11-26 21:00:29.982947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:35.066 [2024-11-26 21:00:29.983004] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:32:35.066 [2024-11-26 21:00:29.983029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:32:35.066 [2024-11-26 21:00:29.983046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:32:35.066 [2024-11-26 21:00:29.983058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3-43: 0 / 261120 wr_cnt: 0 state: free
00:32:35.066 
[2024-11-26 21:00:29.983498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:35.066 [2024-11-26 21:00:29.983777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 
state: free 00:32:35.067 [2024-11-26 21:00:29.983798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.983995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 
0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:35.067 [2024-11-26 21:00:29.984146] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:35.067 [2024-11-26 21:00:29.984156] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e81524cd-e20b-431d-be2e-32d90f4abaa5 00:32:35.067 [2024-11-26 21:00:29.984166] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:35.067 [2024-11-26 21:00:29.984176] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:35.067 [2024-11-26 21:00:29.984187] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:35.067 [2024-11-26 21:00:29.984197] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:35.067 [2024-11-26 21:00:29.984217] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:35.067 [2024-11-26 21:00:29.984227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:35.067 [2024-11-26 21:00:29.984237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:35.067 [2024-11-26 21:00:29.984246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:35.067 [2024-11-26 21:00:29.984254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:35.067 [2024-11-26 21:00:29.984264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.067 [2024-11-26 21:00:29.984274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:35.067 [2024-11-26 21:00:29.984285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:32:35.067 [2024-11-26 21:00:29.984299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.067 [2024-11-26 21:00:30.004483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.067 [2024-11-26 21:00:30.004522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:35.067 [2024-11-26 21:00:30.004537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.149 ms 00:32:35.067 [2024-11-26 21:00:30.004560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.067 [2024-11-26 21:00:30.005173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.067 [2024-11-26 21:00:30.005198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:35.067 [2024-11-26 21:00:30.005209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 
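In the statistics dump above, ftl_dev_dump_stats reports total writes: 960 against user writes: 0, and therefore WAF: inf. The write amplification factor is simply the writes the media absorbed divided by the writes the user issued, so a phase that wrote no user data has an infinite WAF by construction. A quick recomputation with plain shell arithmetic, using the two counters from the log (a sanity-check sketch, not a harness function):

# WAF = media writes / user writes
total_writes=960   # 'total writes' from the dump above
user_writes=0      # 'user writes' from the dump above
if (( user_writes == 0 )); then
    echo "WAF: inf"    # division by zero, matching the 'WAF: inf' line above
else
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
fi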
00:32:35.067 [2024-11-26 21:00:30.005219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.057791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.057828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:35.326 [2024-11-26 21:00:30.057842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.057854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.057908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.057924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:35.326 [2024-11-26 21:00:30.057935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.057946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.058026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.058041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:35.326 [2024-11-26 21:00:30.058052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.058062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.058081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.058092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:35.326 [2024-11-26 21:00:30.058107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.058118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.184557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.184628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:35.326 [2024-11-26 21:00:30.184647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.184659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.283153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.283210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:35.326 [2024-11-26 21:00:30.283224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.283234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.283330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.283342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:35.326 [2024-11-26 21:00:30.283353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.283363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.283400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.283411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:35.326 [2024-11-26 21:00:30.283420] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.283434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.283532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.283545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:35.326 [2024-11-26 21:00:30.283564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.283575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.283626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.283659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:35.326 [2024-11-26 21:00:30.283669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.283679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.283721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.283732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:35.326 [2024-11-26 21:00:30.283743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.283753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.283812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:35.326 [2024-11-26 21:00:30.283825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:35.326 [2024-11-26 21:00:30.283835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:35.326 [2024-11-26 21:00:30.283849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.326 [2024-11-26 21:00:30.283970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 564.385 ms, result 0 00:32:36.703 00:32:36.703 00:32:36.703 21:00:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:38.078 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:38.078 21:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:38.078 21:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:38.078 21:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:38.078 21:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:38.336 21:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:38.595 21:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:38.595 21:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:38.595 Process with pid 81562 is not found 00:32:38.595 21:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81562 00:32:38.595 21:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81562 ']' 00:32:38.595 21:00:33 ftl.ftl_dirty_shutdown 
-- common/autotest_common.sh@958 -- # kill -0 81562 00:32:38.595 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81562) - No such process 00:32:38.595 21:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81562 is not found' 00:32:38.595 21:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:38.854 Remove shared memory files 00:32:38.854 21:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:38.854 21:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:38.854 21:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:38.854 21:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:38.854 21:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:38.854 21:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:38.854 21:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:38.854 ************************************ 00:32:38.854 END TEST ftl_dirty_shutdown 00:32:38.854 ************************************ 00:32:38.854 00:32:38.854 real 3m23.053s 00:32:38.854 user 3m50.817s 00:32:38.854 sys 0m37.622s 00:32:38.854 21:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:38.854 21:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:38.854 21:00:33 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:38.854 21:00:33 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:38.854 21:00:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:38.854 21:00:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:38.854 ************************************ 00:32:38.854 START TEST ftl_upgrade_shutdown 00:32:38.854 ************************************ 00:32:38.854 21:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:39.114 * Looking for test storage... 
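The tail of the dirty-shutdown test above follows a verify-then-teardown pattern: md5sum -c checks testfile2 against the checksum recorded before the dirty shutdown, restore_kill removes the scratch files, and killprocess tolerates the pid already being gone (kill -0 fails with 'No such process', so the harness only echoes a note and continues). A condensed sketch of that pattern, with the pid and directory passed in as illustrative parameters rather than the real autotest_common.sh variables:

#!/usr/bin/env bash
# Verify test data, then tear down; tolerate an already-exited target process.
verify_and_teardown() {
    local pid=$1 dir=$2
    md5sum -c "$dir/testfile2.md5"                # data must match the pre-shutdown checksum
    rm -f "$dir"/testfile "$dir"/testfile2 "$dir"/testfile.md5 "$dir"/testfile2.md5
    if kill -0 "$pid" 2>/dev/null; then           # signal 0 only probes for existence
        kill "$pid"                               # still running: terminate it
    else
        echo "Process with pid $pid is not found" # the message seen in the log above
    fi
}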
00:32:39.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:39.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.114 --rc genhtml_branch_coverage=1 00:32:39.114 --rc genhtml_function_coverage=1 00:32:39.114 --rc genhtml_legend=1 00:32:39.114 --rc geninfo_all_blocks=1 00:32:39.114 --rc geninfo_unexecuted_blocks=1 00:32:39.114 00:32:39.114 ' 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:39.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.114 --rc genhtml_branch_coverage=1 00:32:39.114 --rc genhtml_function_coverage=1 00:32:39.114 --rc genhtml_legend=1 00:32:39.114 --rc geninfo_all_blocks=1 00:32:39.114 --rc geninfo_unexecuted_blocks=1 00:32:39.114 00:32:39.114 ' 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:39.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.114 --rc genhtml_branch_coverage=1 00:32:39.114 --rc genhtml_function_coverage=1 00:32:39.114 --rc genhtml_legend=1 00:32:39.114 --rc geninfo_all_blocks=1 00:32:39.114 --rc geninfo_unexecuted_blocks=1 00:32:39.114 00:32:39.114 ' 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:39.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.114 --rc genhtml_branch_coverage=1 00:32:39.114 --rc genhtml_function_coverage=1 00:32:39.114 --rc genhtml_legend=1 00:32:39.114 --rc geninfo_all_blocks=1 00:32:39.114 --rc geninfo_unexecuted_blocks=1 00:32:39.114 00:32:39.114 ' 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:39.114 21:00:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:39.114 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:39.114 21:00:34 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83753 00:32:39.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83753 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83753 ']' 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.115 21:00:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:39.374 [2024-11-26 21:00:34.175449] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
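Here the harness launches spdk_tgt pinned to core 0, records spdk_tgt_pid=83753, and waitforlisten then blocks until the target answers RPCs on /var/tmp/spdk.sock, giving up after max_retries=100 attempts (per the xtrace above). A minimal polling loop in the same shape; the loop body simplifies the real autotest_common.sh helper, and the rpc.py path mirrors the one used throughout this log:

# Block until the target answers an RPC on its UNIX socket, or give up.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                      # target died while we waited
        "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1                                                        # never came up in the retry budget
}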
00:32:39.374 [2024-11-26 21:00:34.175910] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83753 ] 00:32:39.633 [2024-11-26 21:00:34.384915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.633 [2024-11-26 21:00:34.566104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:40.570 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:40.571 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:40.571 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:40.571 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:40.571 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:40.571 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:40.571 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:40.571 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:40.571 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:40.571 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:40.571 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:40.571 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:40.829 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:40.829 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:40.829 21:00:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:40.829 21:00:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:40.829 21:00:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:40.829 21:00:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:40.829 21:00:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:40.830 21:00:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:41.397 { 00:32:41.397 "name": "basen1", 00:32:41.397 "aliases": [ 00:32:41.397 "51102872-cc2c-4506-b71e-4fab9378c776" 00:32:41.397 ], 00:32:41.397 "product_name": "NVMe disk", 00:32:41.397 "block_size": 4096, 00:32:41.397 "num_blocks": 1310720, 00:32:41.397 "uuid": "51102872-cc2c-4506-b71e-4fab9378c776", 00:32:41.397 "numa_id": -1, 00:32:41.397 "assigned_rate_limits": { 00:32:41.397 "rw_ios_per_sec": 0, 00:32:41.397 "rw_mbytes_per_sec": 0, 00:32:41.397 "r_mbytes_per_sec": 0, 00:32:41.397 "w_mbytes_per_sec": 0 00:32:41.397 }, 00:32:41.397 "claimed": true, 00:32:41.397 "claim_type": "read_many_write_one", 00:32:41.397 "zoned": false, 00:32:41.397 "supported_io_types": { 00:32:41.397 "read": true, 00:32:41.397 "write": true, 00:32:41.397 "unmap": true, 00:32:41.397 "flush": true, 00:32:41.397 "reset": true, 00:32:41.397 "nvme_admin": true, 00:32:41.397 "nvme_io": true, 00:32:41.397 "nvme_io_md": false, 00:32:41.397 "write_zeroes": true, 00:32:41.397 "zcopy": false, 00:32:41.397 "get_zone_info": false, 00:32:41.397 "zone_management": false, 00:32:41.397 "zone_append": false, 00:32:41.397 "compare": true, 00:32:41.397 "compare_and_write": false, 00:32:41.397 "abort": true, 00:32:41.397 "seek_hole": false, 00:32:41.397 "seek_data": false, 00:32:41.397 "copy": true, 00:32:41.397 "nvme_iov_md": false 00:32:41.397 }, 00:32:41.397 "driver_specific": { 00:32:41.397 "nvme": [ 00:32:41.397 { 00:32:41.397 "pci_address": "0000:00:11.0", 00:32:41.397 "trid": { 00:32:41.397 "trtype": "PCIe", 00:32:41.397 "traddr": "0000:00:11.0" 00:32:41.397 }, 00:32:41.397 "ctrlr_data": { 00:32:41.397 "cntlid": 0, 00:32:41.397 "vendor_id": "0x1b36", 00:32:41.397 "model_number": "QEMU NVMe Ctrl", 00:32:41.397 "serial_number": "12341", 00:32:41.397 "firmware_revision": "8.0.0", 00:32:41.397 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:41.397 "oacs": { 00:32:41.397 "security": 0, 00:32:41.397 "format": 1, 00:32:41.397 "firmware": 0, 00:32:41.397 "ns_manage": 1 00:32:41.397 }, 00:32:41.397 "multi_ctrlr": false, 00:32:41.397 "ana_reporting": false 00:32:41.397 }, 00:32:41.397 "vs": { 00:32:41.397 "nvme_version": "1.4" 00:32:41.397 }, 00:32:41.397 "ns_data": { 00:32:41.397 "id": 1, 00:32:41.397 "can_share": false 00:32:41.397 } 00:32:41.397 } 00:32:41.397 ], 00:32:41.397 "mp_policy": "active_passive" 00:32:41.397 } 00:32:41.397 } 00:32:41.397 ]' 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:41.397 21:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:41.656 21:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=77c45368-d26f-4b14-8074-48b034327011 00:32:41.656 21:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:41.656 21:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 77c45368-d26f-4b14-8074-48b034327011 00:32:41.915 21:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:42.174 21:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=97b3433f-a4d5-48ac-87ff-71d25dc92b79 00:32:42.174 21:00:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 97b3433f-a4d5-48ac-87ff-71d25dc92b79 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=ba272863-b07f-4ef5-90ef-822dd82deed5 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z ba272863-b07f-4ef5-90ef-822dd82deed5 ]] 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 ba272863-b07f-4ef5-90ef-822dd82deed5 5120 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=ba272863-b07f-4ef5-90ef-822dd82deed5 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size ba272863-b07f-4ef5-90ef-822dd82deed5 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ba272863-b07f-4ef5-90ef-822dd82deed5 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:42.174 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ba272863-b07f-4ef5-90ef-822dd82deed5 00:32:42.433 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:42.433 { 00:32:42.433 "name": "ba272863-b07f-4ef5-90ef-822dd82deed5", 00:32:42.433 "aliases": [ 00:32:42.433 "lvs/basen1p0" 00:32:42.433 ], 00:32:42.433 "product_name": "Logical Volume", 00:32:42.433 "block_size": 4096, 00:32:42.433 "num_blocks": 5242880, 00:32:42.433 "uuid": "ba272863-b07f-4ef5-90ef-822dd82deed5", 00:32:42.433 "assigned_rate_limits": { 00:32:42.433 "rw_ios_per_sec": 0, 00:32:42.433 "rw_mbytes_per_sec": 0, 00:32:42.433 "r_mbytes_per_sec": 0, 00:32:42.433 "w_mbytes_per_sec": 0 00:32:42.433 }, 00:32:42.433 "claimed": false, 00:32:42.433 "zoned": false, 00:32:42.433 "supported_io_types": { 00:32:42.433 "read": true, 00:32:42.433 "write": true, 00:32:42.433 "unmap": true, 00:32:42.433 "flush": false, 00:32:42.433 "reset": true, 00:32:42.433 "nvme_admin": false, 00:32:42.433 "nvme_io": false, 00:32:42.433 "nvme_io_md": false, 00:32:42.433 "write_zeroes": 
true, 00:32:42.433 "zcopy": false, 00:32:42.433 "get_zone_info": false, 00:32:42.433 "zone_management": false, 00:32:42.433 "zone_append": false, 00:32:42.433 "compare": false, 00:32:42.433 "compare_and_write": false, 00:32:42.433 "abort": false, 00:32:42.433 "seek_hole": true, 00:32:42.433 "seek_data": true, 00:32:42.433 "copy": false, 00:32:42.433 "nvme_iov_md": false 00:32:42.433 }, 00:32:42.433 "driver_specific": { 00:32:42.433 "lvol": { 00:32:42.433 "lvol_store_uuid": "97b3433f-a4d5-48ac-87ff-71d25dc92b79", 00:32:42.433 "base_bdev": "basen1", 00:32:42.433 "thin_provision": true, 00:32:42.433 "num_allocated_clusters": 0, 00:32:42.433 "snapshot": false, 00:32:42.433 "clone": false, 00:32:42.433 "esnap_clone": false 00:32:42.433 } 00:32:42.433 } 00:32:42.433 } 00:32:42.433 ]' 00:32:42.433 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:42.691 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:42.691 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:42.691 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:32:42.691 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:32:42.691 21:00:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:32:42.691 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:42.691 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:42.691 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:42.949 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:42.949 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:42.949 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:43.208 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:43.208 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:43.208 21:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d ba272863-b07f-4ef5-90ef-822dd82deed5 -c cachen1p0 --l2p_dram_limit 2 00:32:43.468 [2024-11-26 21:00:38.243678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.468 [2024-11-26 21:00:38.243734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:43.468 [2024-11-26 21:00:38.243754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:43.468 [2024-11-26 21:00:38.243765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.468 [2024-11-26 21:00:38.243837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.468 [2024-11-26 21:00:38.243850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:43.468 [2024-11-26 21:00:38.243863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:32:43.468 [2024-11-26 21:00:38.243873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.468 [2024-11-26 21:00:38.243897] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:43.468 [2024-11-26 
21:00:38.244936] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:43.468 [2024-11-26 21:00:38.244965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.468 [2024-11-26 21:00:38.244976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:43.468 [2024-11-26 21:00:38.245008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.070 ms 00:32:43.468 [2024-11-26 21:00:38.245019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.468 [2024-11-26 21:00:38.245063] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 2ecaea59-977f-4218-9e67-d93680c5cb2f 00:32:43.468 [2024-11-26 21:00:38.246587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.468 [2024-11-26 21:00:38.246632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:43.468 [2024-11-26 21:00:38.246645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:32:43.468 [2024-11-26 21:00:38.246658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.468 [2024-11-26 21:00:38.254415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.468 [2024-11-26 21:00:38.254640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:43.468 [2024-11-26 21:00:38.254664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.673 ms 00:32:43.468 [2024-11-26 21:00:38.254678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.468 [2024-11-26 21:00:38.254735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.468 [2024-11-26 21:00:38.254751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:43.468 [2024-11-26 21:00:38.254762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:32:43.468 [2024-11-26 21:00:38.254778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.468 [2024-11-26 21:00:38.254842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.468 [2024-11-26 21:00:38.254857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:43.468 [2024-11-26 21:00:38.254871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:32:43.468 [2024-11-26 21:00:38.254887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.468 [2024-11-26 21:00:38.254914] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:43.468 [2024-11-26 21:00:38.260184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.468 [2024-11-26 21:00:38.260213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:43.468 [2024-11-26 21:00:38.260229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.275 ms 00:32:43.468 [2024-11-26 21:00:38.260256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.468 [2024-11-26 21:00:38.260287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.468 [2024-11-26 21:00:38.260297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:43.468 [2024-11-26 21:00:38.260311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:43.468 [2024-11-26 21:00:38.260321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:43.468 [2024-11-26 21:00:38.260366] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:43.468 [2024-11-26 21:00:38.260493] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:43.468 [2024-11-26 21:00:38.260513] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:43.468 [2024-11-26 21:00:38.260528] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:43.468 [2024-11-26 21:00:38.260543] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:43.468 [2024-11-26 21:00:38.260556] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:43.468 [2024-11-26 21:00:38.260569] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:43.468 [2024-11-26 21:00:38.260582] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:43.468 [2024-11-26 21:00:38.260595] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:43.468 [2024-11-26 21:00:38.260604] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:43.469 [2024-11-26 21:00:38.260618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.469 [2024-11-26 21:00:38.260637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:43.469 [2024-11-26 21:00:38.260651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:32:43.469 [2024-11-26 21:00:38.260673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.469 [2024-11-26 21:00:38.260745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.469 [2024-11-26 21:00:38.260767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:43.469 [2024-11-26 21:00:38.260780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:32:43.469 [2024-11-26 21:00:38.260789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.469 [2024-11-26 21:00:38.260884] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:43.469 [2024-11-26 21:00:38.260898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:43.469 [2024-11-26 21:00:38.260911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:43.469 [2024-11-26 21:00:38.260920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:43.469 [2024-11-26 21:00:38.260933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:43.469 [2024-11-26 21:00:38.260941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:43.469 [2024-11-26 21:00:38.260953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:43.469 [2024-11-26 21:00:38.260962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:43.469 [2024-11-26 21:00:38.260973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:43.469 [2024-11-26 21:00:38.260982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:43.469 [2024-11-26 21:00:38.260992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:43.469 [2024-11-26 21:00:38.261003] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:43.469 [2024-11-26 21:00:38.261014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:43.469 [2024-11-26 21:00:38.261024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:43.469 [2024-11-26 21:00:38.261035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:43.469 [2024-11-26 21:00:38.261044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:43.469 [2024-11-26 21:00:38.261057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:43.469 [2024-11-26 21:00:38.261066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:43.469 [2024-11-26 21:00:38.261078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:43.469 [2024-11-26 21:00:38.261087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:43.469 [2024-11-26 21:00:38.261098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:43.469 [2024-11-26 21:00:38.261106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:43.469 [2024-11-26 21:00:38.261118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:43.469 [2024-11-26 21:00:38.261127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:43.469 [2024-11-26 21:00:38.261139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:43.469 [2024-11-26 21:00:38.261148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:43.469 [2024-11-26 21:00:38.261158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:43.469 [2024-11-26 21:00:38.261168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:43.469 [2024-11-26 21:00:38.261178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:43.469 [2024-11-26 21:00:38.261187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:43.469 [2024-11-26 21:00:38.261198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:43.469 [2024-11-26 21:00:38.261206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:43.469 [2024-11-26 21:00:38.261220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:43.469 [2024-11-26 21:00:38.261229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:43.469 [2024-11-26 21:00:38.261240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:43.469 [2024-11-26 21:00:38.261248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:43.469 [2024-11-26 21:00:38.261259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:43.469 [2024-11-26 21:00:38.261267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:43.469 [2024-11-26 21:00:38.261278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:43.469 [2024-11-26 21:00:38.261287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:43.469 [2024-11-26 21:00:38.261297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:43.469 [2024-11-26 21:00:38.261306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:43.469 [2024-11-26 21:00:38.261316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:43.469 [2024-11-26 21:00:38.261325] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:43.469 [2024-11-26 21:00:38.261337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:43.469 [2024-11-26 21:00:38.261347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:43.469 [2024-11-26 21:00:38.261359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:43.469 [2024-11-26 21:00:38.261369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:43.469 [2024-11-26 21:00:38.261383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:43.469 [2024-11-26 21:00:38.261391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:43.469 [2024-11-26 21:00:38.261403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:43.469 [2024-11-26 21:00:38.261411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:43.469 [2024-11-26 21:00:38.261422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:43.469 [2024-11-26 21:00:38.261436] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:43.469 [2024-11-26 21:00:38.261453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:43.469 [2024-11-26 21:00:38.261467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:43.469 [2024-11-26 21:00:38.261480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:43.469 [2024-11-26 21:00:38.261489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:43.469 [2024-11-26 21:00:38.261502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:43.469 [2024-11-26 21:00:38.261512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:43.469 [2024-11-26 21:00:38.261524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:43.469 [2024-11-26 21:00:38.261533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:43.469 [2024-11-26 21:00:38.261545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:43.469 [2024-11-26 21:00:38.261555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:43.469 [2024-11-26 21:00:38.261569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:43.469 [2024-11-26 21:00:38.261579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:43.469 [2024-11-26 21:00:38.261591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:43.469 [2024-11-26 21:00:38.261600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:43.469 [2024-11-26 21:00:38.261614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:43.469 [2024-11-26 21:00:38.261871] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:43.469 [2024-11-26 21:00:38.261943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:43.469 [2024-11-26 21:00:38.261994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:43.469 [2024-11-26 21:00:38.262046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:43.469 [2024-11-26 21:00:38.262095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:43.469 [2024-11-26 21:00:38.262147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:43.469 [2024-11-26 21:00:38.262330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:43.469 [2024-11-26 21:00:38.262468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:43.469 [2024-11-26 21:00:38.262507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.501 ms 00:32:43.469 [2024-11-26 21:00:38.262540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:43.469 [2024-11-26 21:00:38.262640] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
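An aside on the layout dump above: the reported geometry is internally consistent. With 3774873 L2P entries at an address size of 4 bytes, the mapping table needs

\[ 3\,774\,873 \times 4\ \mathrm{B} = 15\,099\,492\ \mathrm{B} \approx 14.40\ \mathrm{MiB}, \]

which matches the 14.50 MiB l2p region once rounded up to the region's allocation granularity (the rounding rule is an inference; only the entry count, address size, and region size appear in the log). Likewise, the 18432.00 MiB data_btm region is exactly 90% of the 20480.00 MiB base device, with the remainder apparently held by the superblock mirror, the valid map, and unallocated blocks (the type:0xfffffffe entries in the superblock layout).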
00:32:43.469 [2024-11-26 21:00:38.262758] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:48.751 [2024-11-26 21:00:43.594682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:48.751 [2024-11-26 21:00:43.594956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:48.751 [2024-11-26 21:00:43.595064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5332.014 ms 00:32:48.751 [2024-11-26 21:00:43.595106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:48.751 [2024-11-26 21:00:43.633131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:48.751 [2024-11-26 21:00:43.633365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:48.751 [2024-11-26 21:00:43.633476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.604 ms 00:32:48.751 [2024-11-26 21:00:43.633518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:48.751 [2024-11-26 21:00:43.633650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:48.751 [2024-11-26 21:00:43.633935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:48.751 [2024-11-26 21:00:43.633954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:48.751 [2024-11-26 21:00:43.633974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:48.751 [2024-11-26 21:00:43.680403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:48.751 [2024-11-26 21:00:43.680586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:48.751 [2024-11-26 21:00:43.680608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.372 ms 00:32:48.751 [2024-11-26 21:00:43.680636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:48.751 [2024-11-26 21:00:43.680675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:48.751 [2024-11-26 21:00:43.680689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:48.751 [2024-11-26 21:00:43.680700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:48.751 [2024-11-26 21:00:43.680712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:48.751 [2024-11-26 21:00:43.681212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:48.751 [2024-11-26 21:00:43.681229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:48.751 [2024-11-26 21:00:43.681250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.429 ms 00:32:48.751 [2024-11-26 21:00:43.681264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:48.751 [2024-11-26 21:00:43.681301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:48.751 [2024-11-26 21:00:43.681319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:48.751 [2024-11-26 21:00:43.681330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:48.751 [2024-11-26 21:00:43.681345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:48.751 [2024-11-26 21:00:43.702041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:48.751 [2024-11-26 21:00:43.702082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:48.751 [2024-11-26 21:00:43.702096] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.676 ms 00:32:48.751 [2024-11-26 21:00:43.702110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:48.751 [2024-11-26 21:00:43.726282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:48.751 [2024-11-26 21:00:43.727352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:48.751 [2024-11-26 21:00:43.727380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:48.751 [2024-11-26 21:00:43.727396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.160 ms 00:32:48.751 [2024-11-26 21:00:43.727407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.009 [2024-11-26 21:00:43.771278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:49.010 [2024-11-26 21:00:43.771460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:49.010 [2024-11-26 21:00:43.771489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.836 ms 00:32:49.010 [2024-11-26 21:00:43.771501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.010 [2024-11-26 21:00:43.771622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:49.010 [2024-11-26 21:00:43.771659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:49.010 [2024-11-26 21:00:43.771678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:32:49.010 [2024-11-26 21:00:43.771689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.010 [2024-11-26 21:00:43.806291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:49.010 [2024-11-26 21:00:43.806330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:49.010 [2024-11-26 21:00:43.806347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.546 ms 00:32:49.010 [2024-11-26 21:00:43.806357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.010 [2024-11-26 21:00:43.841177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:49.010 [2024-11-26 21:00:43.841213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:49.010 [2024-11-26 21:00:43.841229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.772 ms 00:32:49.010 [2024-11-26 21:00:43.841238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.010 [2024-11-26 21:00:43.841910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:49.010 [2024-11-26 21:00:43.841933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:49.010 [2024-11-26 21:00:43.841951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.632 ms 00:32:49.010 [2024-11-26 21:00:43.841961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.010 [2024-11-26 21:00:43.968105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:49.010 [2024-11-26 21:00:43.968151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:49.010 [2024-11-26 21:00:43.968173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 126.086 ms 00:32:49.010 [2024-11-26 21:00:43.968184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.268 [2024-11-26 21:00:44.004452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:32:49.268 [2024-11-26 21:00:44.004516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:49.268 [2024-11-26 21:00:44.004555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.198 ms 00:32:49.268 [2024-11-26 21:00:44.004566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.268 [2024-11-26 21:00:44.040173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:49.268 [2024-11-26 21:00:44.040211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:49.268 [2024-11-26 21:00:44.040227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.572 ms 00:32:49.268 [2024-11-26 21:00:44.040238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.268 [2024-11-26 21:00:44.075566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:49.268 [2024-11-26 21:00:44.075604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:49.268 [2024-11-26 21:00:44.075631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.298 ms 00:32:49.268 [2024-11-26 21:00:44.075641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.268 [2024-11-26 21:00:44.075673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:49.268 [2024-11-26 21:00:44.075684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:49.268 [2024-11-26 21:00:44.075700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:49.268 [2024-11-26 21:00:44.075710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.268 [2024-11-26 21:00:44.075810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:49.268 [2024-11-26 21:00:44.075825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:49.268 [2024-11-26 21:00:44.075838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:32:49.269 [2024-11-26 21:00:44.075847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:49.269 [2024-11-26 21:00:44.076916] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5832.765 ms, result 0 00:32:49.269 { 00:32:49.269 "name": "ftl", 00:32:49.269 "uuid": "2ecaea59-977f-4218-9e67-d93680c5cb2f" 00:32:49.269 } 00:32:49.269 21:00:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:49.527 [2024-11-26 21:00:44.288120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.527 21:00:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:49.786 21:00:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:49.786 [2024-11-26 21:00:44.704383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:49.786 21:00:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:50.044 [2024-11-26 21:00:44.885975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:50.044 21:00:44 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:50.303 Fill FTL, iteration 1 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83899 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83899 /var/tmp/spdk.tgt.sock 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83899 ']' 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:50.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.303 21:00:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:50.561 [2024-11-26 21:00:45.313568] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
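For orientation, the four rpc.py invocations traced above are what publish the freshly created FTL bdev over NVMe/TCP. Condensed into plain shell (arguments copied verbatim from the trace, comments added; this is a recap, not an excerpt of the actual scripts):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP                        # "*** TCP Transport Init ***"
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1  # allow any host, max 1 namespace
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl      # attach bdev "ftl" as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
         -t TCP -f ipv4 -s 4420 -a 127.0.0.1                       # listen on 127.0.0.1:4420
    $rpc save_config                                               # persist the target configuration

Everything after this point talks to the FTL device through that subsystem rather than through the local bdev layer.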
00:32:50.561 [2024-11-26 21:00:45.313959] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83899 ] 00:32:50.561 [2024-11-26 21:00:45.501990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.820 [2024-11-26 21:00:45.683132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.754 21:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.754 21:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:51.754 21:00:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:52.013 ftln1 00:32:52.271 21:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:52.271 21:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:52.271 21:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:52.271 21:00:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83899 00:32:52.271 21:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83899 ']' 00:32:52.271 21:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83899 00:32:52.529 21:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:52.529 21:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:52.529 21:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83899 00:32:52.529 killing process with pid 83899 00:32:52.529 21:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:52.529 21:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:52.529 21:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83899' 00:32:52.529 21:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83899 00:32:52.529 21:00:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83899 00:32:55.066 21:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:55.066 21:00:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:55.066 [2024-11-26 21:00:49.917174] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
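tcp_dd itself is worth unpacking, since the trace above interleaves its two halves. A second SPDK process is started purely as an NVMe/TCP initiator: it attaches to the subsystem, which surfaces the remote namespace as bdev ftln1, and its bdev configuration is dumped to a JSON file that spdk_dd can consume. A minimal sketch reconstructed from the trace (the redirect into ini.json is an assumption based on the -f check at ftl/common.sh@153):

    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!                                    # pid 83899 in this run
    $spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
        -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0                  # prints the namespace bdev name: ftln1
    {
        echo '{"subsystems": ['
        $spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
        echo ']}'
    } > $spdk/test/ftl/config/ini.json
    killprocess "$spdk_ini_pid"                        # helper target only needed to build ini.json
    $spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=$spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

On subsequent tcp_dd calls the -f test at common.sh@153 finds ini.json already present, so only the spdk_dd step runs (the "return 0" at common.sh@154 in the later traces).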
00:32:55.066 [2024-11-26 21:00:49.917351] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83957 ] 00:32:55.323 [2024-11-26 21:00:50.103499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.323 [2024-11-26 21:00:50.239086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.222  [2024-11-26T21:00:52.782Z] Copying: 265/1024 [MB] (265 MBps) [2024-11-26T21:00:54.157Z] Copying: 521/1024 [MB] (256 MBps) [2024-11-26T21:00:55.091Z] Copying: 782/1024 [MB] (261 MBps) [2024-11-26T21:00:56.027Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:33:01.033 00:33:01.033 Calculate MD5 checksum, iteration 1 00:33:01.033 21:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:33:01.033 21:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:33:01.033 21:00:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:01.033 21:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:01.033 21:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:01.033 21:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:01.033 21:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:01.033 21:00:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:01.291 [2024-11-26 21:00:56.091519] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
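A quick volume check on that first fill: the dd moved exactly the size configured at upgrade_shutdown.sh@28, since

\[ \mathtt{count} \times \mathtt{bs} = 1\,024 \times 1\,048\,576\ \mathrm{B} = 1\,073\,741\,824\ \mathrm{B} = 1\ \mathrm{GiB}, \]

reported by the progress line as 1024/1024 [MB] at an average of 257 MBps. The read-back for checksumming runs markedly faster (~600 MBps); a plausible explanation is that the write side also pays for /dev/urandom generation at qd=2, though the log itself does not say.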
00:33:01.291 [2024-11-26 21:00:56.091941] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84021 ] 00:33:01.291 [2024-11-26 21:00:56.278710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.549 [2024-11-26 21:00:56.411248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:03.451  [2024-11-26T21:00:58.702Z] Copying: 650/1024 [MB] (650 MBps) [2024-11-26T21:00:59.635Z] Copying: 1024/1024 [MB] (average 600 MBps) 00:33:04.641 00:33:04.899 21:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:33:04.899 21:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:06.797 21:01:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:06.797 Fill FTL, iteration 2 00:33:06.797 21:01:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=47a9280c1101feb1378814e4255b60c4 00:33:06.797 21:01:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:06.797 21:01:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:06.797 21:01:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:33:06.797 21:01:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:06.797 21:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:06.797 21:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:06.797 21:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:06.797 21:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:06.797 21:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:06.797 [2024-11-26 21:01:01.524256] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
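At this point the shape of the whole test loop is visible. Reconstructed from the seek/skip/sums assignments in the trace (variable names follow upgrade_shutdown.sh@28-48; the loop body is a paraphrase, not the script's literal text, and $testdir stands in for /home/vagrant/spdk_repo/spdk/test/ftl):

    seek=0; skip=0; iterations=2; sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        seek=$((seek + 1024))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sums[i]=$(md5sum $testdir/file | cut -d' ' -f1)   # 47a9280c… here, d5c2c197… for iteration 2
    done

Each iteration writes 1 GiB of random data through the NVMe/TCP path, reads the same LBA range back, and records its MD5 so the data can be re-verified after the shutdown/upgrade cycle that follows.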
00:33:06.797 [2024-11-26 21:01:01.524429] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84085 ] 00:33:06.797 [2024-11-26 21:01:01.720705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.056 [2024-11-26 21:01:01.884672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.433  [2024-11-26T21:01:04.805Z] Copying: 255/1024 [MB] (255 MBps) [2024-11-26T21:01:05.781Z] Copying: 506/1024 [MB] (251 MBps) [2024-11-26T21:01:06.728Z] Copying: 755/1024 [MB] (249 MBps) [2024-11-26T21:01:06.728Z] Copying: 1003/1024 [MB] (248 MBps) [2024-11-26T21:01:08.102Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:33:13.108 00:33:13.108 Calculate MD5 checksum, iteration 2 00:33:13.108 21:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:33:13.108 21:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:33:13.108 21:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:13.108 21:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:13.108 21:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:13.108 21:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:13.108 21:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:13.108 21:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:13.108 [2024-11-26 21:01:07.880574] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:33:13.108 [2024-11-26 21:01:07.881769] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84149 ] 00:33:13.109 [2024-11-26 21:01:08.068190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.367 [2024-11-26 21:01:08.200666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.268  [2024-11-26T21:01:10.827Z] Copying: 640/1024 [MB] (640 MBps) [2024-11-26T21:01:12.201Z] Copying: 1024/1024 [MB] (average 639 MBps) 00:33:17.207 00:33:17.207 21:01:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:33:17.207 21:01:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:19.110 21:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:19.110 21:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d5c2c19767858f729c9bbb48b4e375ae 00:33:19.110 21:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:19.110 21:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:19.110 21:01:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:19.110 [2024-11-26 21:01:14.091716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:19.110 [2024-11-26 21:01:14.091762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:19.110 [2024-11-26 21:01:14.091778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:19.110 [2024-11-26 21:01:14.091789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:19.110 [2024-11-26 21:01:14.091814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:19.110 [2024-11-26 21:01:14.091830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:19.110 [2024-11-26 21:01:14.091840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:19.110 [2024-11-26 21:01:14.091850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:19.110 [2024-11-26 21:01:14.091870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:19.110 [2024-11-26 21:01:14.091880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:19.110 [2024-11-26 21:01:14.091890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:19.110 [2024-11-26 21:01:14.091900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:19.110 [2024-11-26 21:01:14.091962] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.233 ms, result 0 00:33:19.110 true 00:33:19.369 21:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:19.369 { 00:33:19.369 "name": "ftl", 00:33:19.369 "properties": [ 00:33:19.369 { 00:33:19.369 "name": "superblock_version", 00:33:19.369 "value": 5, 00:33:19.369 "read-only": true 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "name": "base_device", 00:33:19.369 "bands": [ 00:33:19.369 { 00:33:19.369 "id": 0, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 
00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 1, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 2, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 3, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 4, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 5, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 6, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 7, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 8, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 9, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 10, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 11, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 12, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 13, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 14, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 15, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 16, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 17, 00:33:19.369 "state": "FREE", 00:33:19.369 "validity": 0.0 00:33:19.369 } 00:33:19.369 ], 00:33:19.369 "read-only": true 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "name": "cache_device", 00:33:19.369 "type": "bdev", 00:33:19.369 "chunks": [ 00:33:19.369 { 00:33:19.369 "id": 0, 00:33:19.369 "state": "INACTIVE", 00:33:19.369 "utilization": 0.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 1, 00:33:19.369 "state": "CLOSED", 00:33:19.369 "utilization": 1.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 2, 00:33:19.369 "state": "CLOSED", 00:33:19.369 "utilization": 1.0 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 3, 00:33:19.369 "state": "OPEN", 00:33:19.369 "utilization": 0.001953125 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "id": 4, 00:33:19.369 "state": "OPEN", 00:33:19.369 "utilization": 0.0 00:33:19.369 } 00:33:19.369 ], 00:33:19.369 "read-only": true 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "name": "verbose_mode", 00:33:19.369 "value": true, 00:33:19.369 "unit": "", 00:33:19.369 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:19.369 }, 00:33:19.369 { 00:33:19.369 "name": "prep_upgrade_on_shutdown", 00:33:19.369 "value": false, 00:33:19.369 "unit": "", 00:33:19.369 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:19.369 } 00:33:19.369 ] 00:33:19.369 } 00:33:19.369 21:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:33:19.628 [2024-11-26 21:01:14.520052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
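The cache_device section of the properties dump above is what the next check keys off. Chunk 0 is the inactive spare, chunks 1 and 2 are CLOSED at utilization 1.0, and chunk 3 is OPEN at utilization 0.001953125. Assuming 1024 MiB chunks (the 5120 MiB NV cache split across its 5 chunks) and 4 KiB blocks, neither of which is stated in this dump, that open-chunk figure works out to

\[ 0.001953125 \times \frac{1024\ \mathrm{MiB}}{4\ \mathrm{KiB}} = \frac{262\,144}{512} = 512\ \text{blocks} = 2\ \mathrm{MiB}. \]

The jq filter a few lines down counts the chunks with non-zero utilization, so it returns used=3 (ids 1 through 3), and the [[ 3 -eq 0 ]] test confirms the cache is non-empty before prep_upgrade_on_shutdown is enabled.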
00:33:19.628 [2024-11-26 21:01:14.520220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:19.628 [2024-11-26 21:01:14.520325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:19.628 [2024-11-26 21:01:14.520364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:19.628 [2024-11-26 21:01:14.520422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:19.628 [2024-11-26 21:01:14.520455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:19.628 [2024-11-26 21:01:14.520540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:19.628 [2024-11-26 21:01:14.520575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:19.628 [2024-11-26 21:01:14.520639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:19.628 [2024-11-26 21:01:14.520676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:19.628 [2024-11-26 21:01:14.520751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:19.628 [2024-11-26 21:01:14.520786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:19.628 [2024-11-26 21:01:14.520872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.802 ms, result 0 00:33:19.628 true 00:33:19.628 21:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:33:19.628 21:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:19.628 21:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:19.887 21:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:33:19.887 21:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:33:19.887 21:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:20.145 [2024-11-26 21:01:14.944416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.145 [2024-11-26 21:01:14.944457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:20.145 [2024-11-26 21:01:14.944471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:20.145 [2024-11-26 21:01:14.944492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.145 [2024-11-26 21:01:14.944517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.145 [2024-11-26 21:01:14.944528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:20.145 [2024-11-26 21:01:14.944538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:20.145 [2024-11-26 21:01:14.944547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:20.145 [2024-11-26 21:01:14.944567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:20.145 [2024-11-26 21:01:14.944577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:20.145 [2024-11-26 21:01:14.944588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:20.145 [2024-11-26 21:01:14.944597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:33:20.145 [2024-11-26 21:01:14.944672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.244 ms, result 0 00:33:20.145 true 00:33:20.145 21:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:20.404 { 00:33:20.404 "name": "ftl", 00:33:20.404 "properties": [ 00:33:20.404 { 00:33:20.404 "name": "superblock_version", 00:33:20.404 "value": 5, 00:33:20.404 "read-only": true 00:33:20.404 }, 00:33:20.404 { 00:33:20.404 "name": "base_device", 00:33:20.404 "bands": [ 00:33:20.404 { 00:33:20.404 "id": 0, 00:33:20.404 "state": "FREE", 00:33:20.404 "validity": 0.0 00:33:20.404 }, 00:33:20.404 { 00:33:20.404 "id": 1, 00:33:20.404 "state": "FREE", 00:33:20.404 "validity": 0.0 00:33:20.404 }, 00:33:20.404 { 00:33:20.404 "id": 2, 00:33:20.404 "state": "FREE", 00:33:20.404 "validity": 0.0 00:33:20.404 }, 00:33:20.404 { 00:33:20.404 "id": 3, 00:33:20.404 "state": "FREE", 00:33:20.404 "validity": 0.0 00:33:20.404 }, 00:33:20.404 { 00:33:20.405 "id": 4, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 5, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 6, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 7, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 8, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 9, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 10, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 11, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 12, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 13, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 14, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 15, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 16, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 17, 00:33:20.405 "state": "FREE", 00:33:20.405 "validity": 0.0 00:33:20.405 } 00:33:20.405 ], 00:33:20.405 "read-only": true 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "name": "cache_device", 00:33:20.405 "type": "bdev", 00:33:20.405 "chunks": [ 00:33:20.405 { 00:33:20.405 "id": 0, 00:33:20.405 "state": "INACTIVE", 00:33:20.405 "utilization": 0.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 1, 00:33:20.405 "state": "CLOSED", 00:33:20.405 "utilization": 1.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 2, 00:33:20.405 "state": "CLOSED", 00:33:20.405 "utilization": 1.0 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 3, 00:33:20.405 "state": "OPEN", 00:33:20.405 "utilization": 0.001953125 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "id": 4, 00:33:20.405 "state": "OPEN", 00:33:20.405 "utilization": 0.0 00:33:20.405 } 00:33:20.405 ], 00:33:20.405 "read-only": true 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "name": "verbose_mode", 
00:33:20.405 "value": true, 00:33:20.405 "unit": "", 00:33:20.405 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:20.405 }, 00:33:20.405 { 00:33:20.405 "name": "prep_upgrade_on_shutdown", 00:33:20.405 "value": true, 00:33:20.405 "unit": "", 00:33:20.405 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:20.405 } 00:33:20.405 ] 00:33:20.405 } 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83753 ]] 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83753 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83753 ']' 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83753 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83753 00:33:20.405 killing process with pid 83753 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83753' 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83753 00:33:20.405 21:01:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83753 00:33:21.341 [2024-11-26 21:01:16.298172] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:21.341 [2024-11-26 21:01:16.315061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.341 [2024-11-26 21:01:16.315101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:21.341 [2024-11-26 21:01:16.315116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:21.341 [2024-11-26 21:01:16.315126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:21.341 [2024-11-26 21:01:16.315148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:21.341 [2024-11-26 21:01:16.319247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:21.341 [2024-11-26 21:01:16.319277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:21.341 [2024-11-26 21:01:16.319289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.084 ms 00:33:21.341 [2024-11-26 21:01:16.319304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.460 [2024-11-26 21:01:23.640557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.460 [2024-11-26 21:01:23.640807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:29.460 [2024-11-26 21:01:23.640840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7321.188 ms 00:33:29.460 [2024-11-26 21:01:23.640852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.460 [2024-11-26 21:01:23.641954] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:29.460 [2024-11-26 21:01:23.641992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:29.460 [2024-11-26 21:01:23.642004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.077 ms 00:33:29.460 [2024-11-26 21:01:23.642014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.460 [2024-11-26 21:01:23.642940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.460 [2024-11-26 21:01:23.642954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:29.460 [2024-11-26 21:01:23.642966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.908 ms 00:33:29.460 [2024-11-26 21:01:23.642981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.460 [2024-11-26 21:01:23.657702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.460 [2024-11-26 21:01:23.657738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:29.460 [2024-11-26 21:01:23.657751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.689 ms 00:33:29.460 [2024-11-26 21:01:23.657762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.460 [2024-11-26 21:01:23.666813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.460 [2024-11-26 21:01:23.666850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:29.460 [2024-11-26 21:01:23.666863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.029 ms 00:33:29.460 [2024-11-26 21:01:23.666873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.460 [2024-11-26 21:01:23.666949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.460 [2024-11-26 21:01:23.666967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:29.460 [2024-11-26 21:01:23.666978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:33:29.460 [2024-11-26 21:01:23.666989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.460 [2024-11-26 21:01:23.681855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.460 [2024-11-26 21:01:23.682008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:29.460 [2024-11-26 21:01:23.682028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.849 ms 00:33:29.460 [2024-11-26 21:01:23.682038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.460 [2024-11-26 21:01:23.696442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.460 [2024-11-26 21:01:23.696477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:29.460 [2024-11-26 21:01:23.696490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.377 ms 00:33:29.460 [2024-11-26 21:01:23.696500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.460 [2024-11-26 21:01:23.710430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.460 [2024-11-26 21:01:23.710464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:29.460 [2024-11-26 21:01:23.710476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.908 ms 00:33:29.460 [2024-11-26 21:01:23.710485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.460 [2024-11-26 21:01:23.724393] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.460 [2024-11-26 21:01:23.724536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:29.461 [2024-11-26 21:01:23.724556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.858 ms 00:33:29.461 [2024-11-26 21:01:23.724566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:23.724589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:29.461 [2024-11-26 21:01:23.724631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:29.461 [2024-11-26 21:01:23.724644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:29.461 [2024-11-26 21:01:23.724655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:29.461 [2024-11-26 21:01:23.724666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:29.461 [2024-11-26 21:01:23.724825] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:29.461 [2024-11-26 21:01:23.724835] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2ecaea59-977f-4218-9e67-d93680c5cb2f 00:33:29.461 [2024-11-26 21:01:23.724846] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:29.461 [2024-11-26 21:01:23.724856] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:33:29.461 [2024-11-26 21:01:23.724865] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:29.461 [2024-11-26 21:01:23.724875] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:29.461 [2024-11-26 21:01:23.724896] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:29.461 [2024-11-26 21:01:23.724906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:29.461 [2024-11-26 21:01:23.724920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:29.461 [2024-11-26 21:01:23.724932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:29.461 [2024-11-26 21:01:23.724942] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:29.461 [2024-11-26 21:01:23.724953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.461 [2024-11-26 21:01:23.724963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:29.461 [2024-11-26 21:01:23.724973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:33:29.461 [2024-11-26 21:01:23.724983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:23.744699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.461 [2024-11-26 21:01:23.744843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:29.461 [2024-11-26 21:01:23.744969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.684 ms 00:33:29.461 [2024-11-26 21:01:23.745007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:23.745611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.461 [2024-11-26 21:01:23.745738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:29.461 [2024-11-26 21:01:23.745807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.558 ms 00:33:29.461 [2024-11-26 21:01:23.745867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:23.809092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:23.809243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:29.461 [2024-11-26 21:01:23.809385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:23.809423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:23.809476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:23.809508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:29.461 [2024-11-26 21:01:23.809589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:23.809653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:23.809783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:23.809898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:29.461 [2024-11-26 21:01:23.809935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:23.810008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:23.810055] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:23.810087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:29.461 [2024-11-26 21:01:23.810117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:23.810146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:23.934223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:23.934399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:29.461 [2024-11-26 21:01:23.934495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:23.934541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:24.032939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:24.033131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:29.461 [2024-11-26 21:01:24.033208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:24.033243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:24.033378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:24.033412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:29.461 [2024-11-26 21:01:24.033441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:24.033531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:24.033632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:24.033687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:29.461 [2024-11-26 21:01:24.033769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:24.033804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:24.033953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:24.034087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:29.461 [2024-11-26 21:01:24.034159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:24.034193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:24.034269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:24.034304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:29.461 [2024-11-26 21:01:24.034334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:24.034543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:24.034628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:24.034665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:29.461 [2024-11-26 21:01:24.034696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:24.034725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 
[2024-11-26 21:01:24.034808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:29.461 [2024-11-26 21:01:24.034938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:29.461 [2024-11-26 21:01:24.034974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:29.461 [2024-11-26 21:01:24.035004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.461 [2024-11-26 21:01:24.035170] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7720.038 ms, result 0 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84354 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84354 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84354 ']' 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:33.654 21:01:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:33.654 [2024-11-26 21:01:27.946532] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
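The xtrace above (ftl/common.sh@85-91) is tcp_target_setup at work: it launches spdk_tgt pinned to core 0 with the saved tgt.json config, records the pid (84354), and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket; the polling body is illustrative, not the exact autotest_common.sh implementation:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # Poll the RPC socket until the target responds, as waitforlisten does.
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done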
00:33:33.654 [2024-11-26 21:01:27.946771] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84354 ] 00:33:33.654 [2024-11-26 21:01:28.127841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.654 [2024-11-26 21:01:28.236294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.221 [2024-11-26 21:01:29.171854] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:34.221 [2024-11-26 21:01:29.171913] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:34.480 [2024-11-26 21:01:29.318791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.480 [2024-11-26 21:01:29.319016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:34.480 [2024-11-26 21:01:29.319133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:34.480 [2024-11-26 21:01:29.319173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.480 [2024-11-26 21:01:29.319274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.480 [2024-11-26 21:01:29.319313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:34.480 [2024-11-26 21:01:29.319344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:33:34.480 [2024-11-26 21:01:29.319438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.480 [2024-11-26 21:01:29.319477] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:34.480 [2024-11-26 21:01:29.320529] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:34.480 [2024-11-26 21:01:29.320553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.480 [2024-11-26 21:01:29.320564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:34.480 [2024-11-26 21:01:29.320575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.083 ms 00:33:34.480 [2024-11-26 21:01:29.320585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.480 [2024-11-26 21:01:29.322165] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:34.480 [2024-11-26 21:01:29.340867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.480 [2024-11-26 21:01:29.341027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:34.480 [2024-11-26 21:01:29.341147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.702 ms 00:33:34.480 [2024-11-26 21:01:29.341186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.480 [2024-11-26 21:01:29.341273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.480 [2024-11-26 21:01:29.341313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:34.480 [2024-11-26 21:01:29.341405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:33:34.480 [2024-11-26 21:01:29.341439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.480 [2024-11-26 21:01:29.348547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.480 [2024-11-26 
21:01:29.348725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:34.480 [2024-11-26 21:01:29.348823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.998 ms 00:33:34.480 [2024-11-26 21:01:29.348860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.480 [2024-11-26 21:01:29.348970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.480 [2024-11-26 21:01:29.349066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:34.480 [2024-11-26 21:01:29.349103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:33:34.480 [2024-11-26 21:01:29.349133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.480 [2024-11-26 21:01:29.349249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.480 [2024-11-26 21:01:29.349349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:34.480 [2024-11-26 21:01:29.349442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:33:34.480 [2024-11-26 21:01:29.349479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.481 [2024-11-26 21:01:29.349536] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:34.481 [2024-11-26 21:01:29.354264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.481 [2024-11-26 21:01:29.354296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:34.481 [2024-11-26 21:01:29.354312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.737 ms 00:33:34.481 [2024-11-26 21:01:29.354322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.481 [2024-11-26 21:01:29.354356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.481 [2024-11-26 21:01:29.354367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:34.481 [2024-11-26 21:01:29.354377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:34.481 [2024-11-26 21:01:29.354387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.481 [2024-11-26 21:01:29.354441] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:34.481 [2024-11-26 21:01:29.354467] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:34.481 [2024-11-26 21:01:29.354512] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:34.481 [2024-11-26 21:01:29.354531] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:34.481 [2024-11-26 21:01:29.354752] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:34.481 [2024-11-26 21:01:29.354817] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:34.481 [2024-11-26 21:01:29.354866] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:34.481 [2024-11-26 21:01:29.354964] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:34.481 [2024-11-26 21:01:29.355020] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:34.481 [2024-11-26 21:01:29.355093] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:34.481 [2024-11-26 21:01:29.355106] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:34.481 [2024-11-26 21:01:29.355116] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:34.481 [2024-11-26 21:01:29.355126] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:34.481 [2024-11-26 21:01:29.355138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.481 [2024-11-26 21:01:29.355148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:34.481 [2024-11-26 21:01:29.355160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.699 ms 00:33:34.481 [2024-11-26 21:01:29.355170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.481 [2024-11-26 21:01:29.355248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.481 [2024-11-26 21:01:29.355259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:34.481 [2024-11-26 21:01:29.355274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:33:34.481 [2024-11-26 21:01:29.355284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.481 [2024-11-26 21:01:29.355377] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:34.481 [2024-11-26 21:01:29.355390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:34.481 [2024-11-26 21:01:29.355401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:34.481 [2024-11-26 21:01:29.355411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:34.481 [2024-11-26 21:01:29.355432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:34.481 [2024-11-26 21:01:29.355451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:34.481 [2024-11-26 21:01:29.355461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:34.481 [2024-11-26 21:01:29.355470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:34.481 [2024-11-26 21:01:29.355490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:34.481 [2024-11-26 21:01:29.355499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:34.481 [2024-11-26 21:01:29.355518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:34.481 [2024-11-26 21:01:29.355527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:34.481 [2024-11-26 21:01:29.355565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:34.481 [2024-11-26 21:01:29.355581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355597] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:34.481 [2024-11-26 21:01:29.355627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:34.481 [2024-11-26 21:01:29.355643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:34.481 [2024-11-26 21:01:29.355659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:34.481 [2024-11-26 21:01:29.355681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:34.481 [2024-11-26 21:01:29.355690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:34.481 [2024-11-26 21:01:29.355700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:34.481 [2024-11-26 21:01:29.355711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:34.481 [2024-11-26 21:01:29.355721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:34.481 [2024-11-26 21:01:29.355730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:34.481 [2024-11-26 21:01:29.355739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:34.481 [2024-11-26 21:01:29.355749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:34.481 [2024-11-26 21:01:29.355758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:34.481 [2024-11-26 21:01:29.355768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:34.481 [2024-11-26 21:01:29.355777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:34.481 [2024-11-26 21:01:29.355795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:34.481 [2024-11-26 21:01:29.355805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:34.481 [2024-11-26 21:01:29.355823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:34.481 [2024-11-26 21:01:29.355851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:34.481 [2024-11-26 21:01:29.355860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355868] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:34.481 [2024-11-26 21:01:29.355878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:34.481 [2024-11-26 21:01:29.355888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:34.481 [2024-11-26 21:01:29.355902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:34.481 [2024-11-26 21:01:29.355912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:34.481 [2024-11-26 21:01:29.355922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:34.481 [2024-11-26 21:01:29.355933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:34.481 [2024-11-26 21:01:29.355943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:34.481 [2024-11-26 21:01:29.355952] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:34.481 [2024-11-26 21:01:29.355972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:34.481 [2024-11-26 21:01:29.355983] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:34.482 [2024-11-26 21:01:29.355996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:34.482 [2024-11-26 21:01:29.356007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:34.482 [2024-11-26 21:01:29.356017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:34.482 [2024-11-26 21:01:29.356028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:34.482 [2024-11-26 21:01:29.356038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:34.482 [2024-11-26 21:01:29.356048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:34.482 [2024-11-26 21:01:29.356059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:34.482 [2024-11-26 21:01:29.356069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:34.482 [2024-11-26 21:01:29.356079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:34.482 [2024-11-26 21:01:29.356089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:34.482 [2024-11-26 21:01:29.356099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:34.482 [2024-11-26 21:01:29.356109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:34.482 [2024-11-26 21:01:29.356118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:34.482 [2024-11-26 21:01:29.356128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:34.482 [2024-11-26 21:01:29.356137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:34.482 [2024-11-26 21:01:29.356147] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:34.482 [2024-11-26 21:01:29.356158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:34.482 [2024-11-26 21:01:29.356168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:34.482 [2024-11-26 21:01:29.356178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:34.482 [2024-11-26 21:01:29.356188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:34.482 [2024-11-26 21:01:29.356198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:34.482 [2024-11-26 21:01:29.356209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.482 [2024-11-26 21:01:29.356219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:34.482 [2024-11-26 21:01:29.356228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.886 ms 00:33:34.482 [2024-11-26 21:01:29.356238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.482 [2024-11-26 21:01:29.356284] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:34.482 [2024-11-26 21:01:29.356300] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:38.673 [2024-11-26 21:01:32.997704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:32.997916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:38.673 [2024-11-26 21:01:32.998031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3641.399 ms 00:33:38.673 [2024-11-26 21:01:32.998071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.035336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.035532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:38.673 [2024-11-26 21:01:33.035708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.847 ms 00:33:38.673 [2024-11-26 21:01:33.035758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.035953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.035995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:38.673 [2024-11-26 21:01:33.036077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:33:38.673 [2024-11-26 21:01:33.036112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.082719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.082874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:38.673 [2024-11-26 21:01:33.082968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.504 ms 00:33:38.673 [2024-11-26 21:01:33.083005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.083067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.083100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:38.673 [2024-11-26 21:01:33.083131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:38.673 [2024-11-26 21:01:33.083161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.083795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.083917] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:38.673 [2024-11-26 21:01:33.083996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.488 ms 00:33:38.673 [2024-11-26 21:01:33.084039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.084111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.084377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:38.673 [2024-11-26 21:01:33.084415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:33:38.673 [2024-11-26 21:01:33.084445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.104452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.104598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:38.673 [2024-11-26 21:01:33.104764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.957 ms 00:33:38.673 [2024-11-26 21:01:33.104802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.142301] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:38.673 [2024-11-26 21:01:33.142458] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:38.673 [2024-11-26 21:01:33.142574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.142607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:38.673 [2024-11-26 21:01:33.142649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.628 ms 00:33:38.673 [2024-11-26 21:01:33.142678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.162032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.162164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:38.673 [2024-11-26 21:01:33.162273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.289 ms 00:33:38.673 [2024-11-26 21:01:33.162309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.179932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.180076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:38.673 [2024-11-26 21:01:33.180176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.562 ms 00:33:38.673 [2024-11-26 21:01:33.180212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.197403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.197550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:38.673 [2024-11-26 21:01:33.197683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.130 ms 00:33:38.673 [2024-11-26 21:01:33.197721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.198431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.198549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:38.673 [2024-11-26 
21:01:33.198628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.579 ms 00:33:38.673 [2024-11-26 21:01:33.198682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.285540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.285769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:38.673 [2024-11-26 21:01:33.285873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.807 ms 00:33:38.673 [2024-11-26 21:01:33.285909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.296617] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:38.673 [2024-11-26 21:01:33.297678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.297795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:38.673 [2024-11-26 21:01:33.297817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.675 ms 00:33:38.673 [2024-11-26 21:01:33.297828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.297950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.297965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:38.673 [2024-11-26 21:01:33.297978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:38.673 [2024-11-26 21:01:33.297988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.298055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.298068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:38.673 [2024-11-26 21:01:33.298080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:33:38.673 [2024-11-26 21:01:33.298090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.298115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.298125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:38.673 [2024-11-26 21:01:33.298140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:38.673 [2024-11-26 21:01:33.298150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.298186] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:38.673 [2024-11-26 21:01:33.298199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.298209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:38.673 [2024-11-26 21:01:33.298219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:33:38.673 [2024-11-26 21:01:33.298229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.333486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.333652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:38.673 [2024-11-26 21:01:33.333736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.234 ms 00:33:38.673 [2024-11-26 21:01:33.333774] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.333915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.673 [2024-11-26 21:01:33.333956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:38.673 [2024-11-26 21:01:33.333988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:33:38.673 [2024-11-26 21:01:33.334017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.673 [2024-11-26 21:01:33.335261] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4016.006 ms, result 0 00:33:38.673 [2024-11-26 21:01:33.350117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:38.673 [2024-11-26 21:01:33.366117] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:38.673 [2024-11-26 21:01:33.374985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:39.240 21:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:39.240 21:01:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:39.240 21:01:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:39.240 21:01:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:39.240 21:01:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:39.498 [2024-11-26 21:01:34.267522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:39.498 [2024-11-26 21:01:34.267605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:39.498 [2024-11-26 21:01:34.267641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:39.498 [2024-11-26 21:01:34.267653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:39.498 [2024-11-26 21:01:34.267682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:39.498 [2024-11-26 21:01:34.267693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:39.498 [2024-11-26 21:01:34.267704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:39.498 [2024-11-26 21:01:34.267715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:39.498 [2024-11-26 21:01:34.267735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:39.498 [2024-11-26 21:01:34.267747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:39.498 [2024-11-26 21:01:34.267758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:39.498 [2024-11-26 21:01:34.267768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:39.498 [2024-11-26 21:01:34.267840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.302 ms, result 0 00:33:39.498 true 00:33:39.498 21:01:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:39.757 { 00:33:39.757 "name": "ftl", 00:33:39.757 "properties": [ 00:33:39.757 { 00:33:39.757 "name": "superblock_version", 00:33:39.757 "value": 5, 00:33:39.757 "read-only": true 00:33:39.757 }, 
00:33:39.757 {
00:33:39.757   "name": "base_device",
00:33:39.757   "bands": [
00:33:39.757     {
00:33:39.757       "id": 0,
00:33:39.757       "state": "CLOSED",
00:33:39.757       "validity": 1.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 1,
00:33:39.757       "state": "CLOSED",
00:33:39.757       "validity": 1.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 2,
00:33:39.757       "state": "CLOSED",
00:33:39.757       "validity": 0.007843137254901933
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 3,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 4,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 5,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 6,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 7,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 8,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 9,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 10,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 11,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 12,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 13,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 14,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 15,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 16,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 17,
00:33:39.757       "state": "FREE",
00:33:39.757       "validity": 0.0
00:33:39.757     }
00:33:39.757   ],
00:33:39.757   "read-only": true
00:33:39.757 },
00:33:39.757 {
00:33:39.757   "name": "cache_device",
00:33:39.757   "type": "bdev",
00:33:39.757   "chunks": [
00:33:39.757     {
00:33:39.757       "id": 0,
00:33:39.757       "state": "INACTIVE",
00:33:39.757       "utilization": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 1,
00:33:39.757       "state": "OPEN",
00:33:39.757       "utilization": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 2,
00:33:39.757       "state": "OPEN",
00:33:39.757       "utilization": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 3,
00:33:39.757       "state": "FREE",
00:33:39.757       "utilization": 0.0
00:33:39.757     },
00:33:39.757     {
00:33:39.757       "id": 4,
00:33:39.757       "state": "FREE",
00:33:39.757       "utilization": 0.0
00:33:39.757     }
00:33:39.757   ],
00:33:39.757   "read-only": true
00:33:39.757 },
00:33:39.757 {
00:33:39.757   "name": "verbose_mode",
00:33:39.757   "value": true,
00:33:39.757   "unit": "",
00:33:39.757   "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:33:39.757 },
00:33:39.757 {
00:33:39.757   "name": "prep_upgrade_on_shutdown",
00:33:39.757   "value": false,
00:33:39.757   "unit": "",
00:33:39.757   "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:33:39.757 }
00:33:39.757 ]
00:33:39.757 }
00:33:39.757 21:01:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:33:39.757 21:01:34
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:39.757 21:01:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:40.016 21:01:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:40.016 21:01:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:40.016 21:01:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:40.016 21:01:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:40.016 21:01:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:40.275 Validate MD5 checksum, iteration 1 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:40.275 21:01:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:40.275 [2024-11-26 21:01:35.225635] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
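The trace above is test_validate_checksum (upgrade_shutdown.sh@96-99) starting its first pass: spdk_dd attaches to the NVMe/TCP target, reads 1024 MiB from the exported ftln1 bdev at queue depth 2, and the copy is then hashed and compared against the digest recorded when the data was written. A condensed sketch of the loop as the xtrace suggests it; the testfile path and expected_sums array are assumptions, since the trace only shows the file path and the comparison inline:

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read the next 1024 MiB slice of ftln1 over NVMe/TCP.
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        # e.g. 47a9280c1101feb1378814e4255b60c4 in iteration 1 above.
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum == "${expected_sums[i]}" ]] || return 1
    done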
00:33:40.275 [2024-11-26 21:01:35.226083] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84461 ] 00:33:40.535 [2024-11-26 21:01:35.414998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.793 [2024-11-26 21:01:35.554106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.697  [2024-11-26T21:01:37.950Z] Copying: 628/1024 [MB] (628 MBps) [2024-11-26T21:01:39.854Z] Copying: 1024/1024 [MB] (average 623 MBps) 00:33:44.860 00:33:44.860 21:01:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:44.860 21:01:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=47a9280c1101feb1378814e4255b60c4 00:33:46.761 Validate MD5 checksum, iteration 2 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 47a9280c1101feb1378814e4255b60c4 != \4\7\a\9\2\8\0\c\1\1\0\1\f\e\b\1\3\7\8\8\1\4\e\4\2\5\5\b\6\0\c\4 ]] 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:46.761 21:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:46.761 [2024-11-26 21:01:41.421565] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 
00:33:46.761 [2024-11-26 21:01:41.422989] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84531 ] 00:33:46.761 [2024-11-26 21:01:41.622432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.019 [2024-11-26 21:01:41.755653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.920  [2024-11-26T21:01:44.172Z] Copying: 644/1024 [MB] (644 MBps) [2024-11-26T21:01:46.701Z] Copying: 1024/1024 [MB] (average 648 MBps) 00:33:51.707 00:33:51.707 21:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:51.707 21:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:53.677 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d5c2c19767858f729c9bbb48b4e375ae 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d5c2c19767858f729c9bbb48b4e375ae != \d\5\c\2\c\1\9\7\6\7\8\5\8\f\7\2\9\c\9\b\b\b\4\8\b\4\e\3\7\5\a\e ]] 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84354 ]] 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84354 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84608 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84608 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84608 ']' 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
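This is the dirty-shutdown step the test is named for: with both checksums matching, tcp_target_shutdown_dirty (ftl/common.sh@137-139) kills the target with SIGKILL instead of letting FTL shut down cleanly, so the 'Set FTL clean state' step never runs, and the relaunched target (pid 84608) must bring the bdev up from a dirty superblock. A sketch of the helper inferred from the three traced commands; the function body is reconstructed, not quoted from ftl/common.sh:

    tcp_target_shutdown_dirty() {
        # SIGKILL: no RPC-driven shutdown, so FTL metadata stays dirty on disk.
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }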
00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:53.678 21:01:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:53.678 [2024-11-26 21:01:48.384353] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:33:53.678 [2024-11-26 21:01:48.384641] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84608 ] 00:33:53.678 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84354 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:53.678 [2024-11-26 21:01:48.551486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.961 [2024-11-26 21:01:48.657177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.900 [2024-11-26 21:01:49.600393] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:54.900 [2024-11-26 21:01:49.600534] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:54.900 [2024-11-26 21:01:49.746865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.900 [2024-11-26 21:01:49.747062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:54.900 [2024-11-26 21:01:49.747186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:54.900 [2024-11-26 21:01:49.747204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.747283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.900 [2024-11-26 21:01:49.747298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:54.900 [2024-11-26 21:01:49.747309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:33:54.900 [2024-11-26 21:01:49.747319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.747344] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:54.900 [2024-11-26 21:01:49.748402] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:54.900 [2024-11-26 21:01:49.748440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.900 [2024-11-26 21:01:49.748452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:54.900 [2024-11-26 21:01:49.748463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.101 ms 00:33:54.900 [2024-11-26 21:01:49.748472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.748855] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:54.900 [2024-11-26 21:01:49.772540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.900 [2024-11-26 21:01:49.772578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:54.900 [2024-11-26 21:01:49.772592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.686 ms 00:33:54.900 [2024-11-26 21:01:49.772602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.786409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:33:54.900 [2024-11-26 21:01:49.786446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:54.900 [2024-11-26 21:01:49.786458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:33:54.900 [2024-11-26 21:01:49.786468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.786979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.900 [2024-11-26 21:01:49.786995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:54.900 [2024-11-26 21:01:49.787007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.431 ms 00:33:54.900 [2024-11-26 21:01:49.787017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.787077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.900 [2024-11-26 21:01:49.787091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:54.900 [2024-11-26 21:01:49.787102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:33:54.900 [2024-11-26 21:01:49.787111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.787136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.900 [2024-11-26 21:01:49.787148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:54.900 [2024-11-26 21:01:49.787158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:54.900 [2024-11-26 21:01:49.787169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.787190] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:54.900 [2024-11-26 21:01:49.791391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.900 [2024-11-26 21:01:49.791421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:54.900 [2024-11-26 21:01:49.791432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.205 ms 00:33:54.900 [2024-11-26 21:01:49.791461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.791493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.900 [2024-11-26 21:01:49.791504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:54.900 [2024-11-26 21:01:49.791515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:54.900 [2024-11-26 21:01:49.791524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.791574] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:54.900 [2024-11-26 21:01:49.791607] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:54.900 [2024-11-26 21:01:49.791661] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:54.900 [2024-11-26 21:01:49.791683] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:54.900 [2024-11-26 21:01:49.791773] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:54.900 [2024-11-26 21:01:49.791786] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:54.900 [2024-11-26 21:01:49.791799] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:54.900 [2024-11-26 21:01:49.791812] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:54.900 [2024-11-26 21:01:49.791823] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:54.900 [2024-11-26 21:01:49.791835] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:54.900 [2024-11-26 21:01:49.791844] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:54.900 [2024-11-26 21:01:49.791854] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:54.900 [2024-11-26 21:01:49.791863] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:54.900 [2024-11-26 21:01:49.791877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.900 [2024-11-26 21:01:49.791889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:54.900 [2024-11-26 21:01:49.791899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:33:54.900 [2024-11-26 21:01:49.791909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.791981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.900 [2024-11-26 21:01:49.791992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:54.900 [2024-11-26 21:01:49.792003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:33:54.900 [2024-11-26 21:01:49.792012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.900 [2024-11-26 21:01:49.792115] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:54.900 [2024-11-26 21:01:49.792131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:54.900 [2024-11-26 21:01:49.792142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:54.900 [2024-11-26 21:01:49.792153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:54.900 [2024-11-26 21:01:49.792163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:54.900 [2024-11-26 21:01:49.792173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:54.900 [2024-11-26 21:01:49.792182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:54.900 [2024-11-26 21:01:49.792191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:54.900 [2024-11-26 21:01:49.792201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:54.900 [2024-11-26 21:01:49.792210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:54.900 [2024-11-26 21:01:49.792221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:54.901 [2024-11-26 21:01:49.792230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:54.901 [2024-11-26 21:01:49.792239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:54.901 [2024-11-26 21:01:49.792249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:54.901 [2024-11-26 21:01:49.792258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:33:54.901 [2024-11-26 21:01:49.792267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:54.901 [2024-11-26 21:01:49.792277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:54.901 [2024-11-26 21:01:49.792286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:54.901 [2024-11-26 21:01:49.792295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:54.901 [2024-11-26 21:01:49.792305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:54.901 [2024-11-26 21:01:49.792315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:54.901 [2024-11-26 21:01:49.792335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:54.901 [2024-11-26 21:01:49.792344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:54.901 [2024-11-26 21:01:49.792354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:54.901 [2024-11-26 21:01:49.792363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:54.901 [2024-11-26 21:01:49.792372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:54.901 [2024-11-26 21:01:49.792382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:54.901 [2024-11-26 21:01:49.792391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:54.901 [2024-11-26 21:01:49.792400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:54.901 [2024-11-26 21:01:49.792410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:54.901 [2024-11-26 21:01:49.792419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:54.901 [2024-11-26 21:01:49.792428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:54.901 [2024-11-26 21:01:49.792437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:54.901 [2024-11-26 21:01:49.792446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:54.901 [2024-11-26 21:01:49.792456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:54.901 [2024-11-26 21:01:49.792466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:54.901 [2024-11-26 21:01:49.792476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:54.901 [2024-11-26 21:01:49.792485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:54.901 [2024-11-26 21:01:49.792494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:54.901 [2024-11-26 21:01:49.792503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:54.901 [2024-11-26 21:01:49.792513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:54.901 [2024-11-26 21:01:49.792522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:54.901 [2024-11-26 21:01:49.792533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:54.901 [2024-11-26 21:01:49.792541] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:54.901 [2024-11-26 21:01:49.792551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:54.901 [2024-11-26 21:01:49.792561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:54.901 [2024-11-26 21:01:49.792571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:33:54.901 [2024-11-26 21:01:49.792582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:54.901 [2024-11-26 21:01:49.792592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:54.901 [2024-11-26 21:01:49.792600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:54.901 [2024-11-26 21:01:49.792610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:54.901 [2024-11-26 21:01:49.792619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:54.901 [2024-11-26 21:01:49.792640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:54.901 [2024-11-26 21:01:49.792652] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:54.901 [2024-11-26 21:01:49.792664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:54.901 [2024-11-26 21:01:49.792676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:54.901 [2024-11-26 21:01:49.792687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:54.901 [2024-11-26 21:01:49.792697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:54.901 [2024-11-26 21:01:49.792707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:54.901 [2024-11-26 21:01:49.792718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:54.901 [2024-11-26 21:01:49.792729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:54.901 [2024-11-26 21:01:49.792740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:54.901 [2024-11-26 21:01:49.792750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:54.901 [2024-11-26 21:01:49.792760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:54.901 [2024-11-26 21:01:49.792770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:54.901 [2024-11-26 21:01:49.792780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:54.901 [2024-11-26 21:01:49.792791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:54.901 [2024-11-26 21:01:49.792801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:54.901 [2024-11-26 21:01:49.792812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:54.901 [2024-11-26 21:01:49.792821] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:33:54.901 [2024-11-26 21:01:49.792833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:54.901 [2024-11-26 21:01:49.792848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:54.901 [2024-11-26 21:01:49.792859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:54.901 [2024-11-26 21:01:49.792869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:54.901 [2024-11-26 21:01:49.792881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:54.901 [2024-11-26 21:01:49.792892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.901 [2024-11-26 21:01:49.792903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:54.901 [2024-11-26 21:01:49.792914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.848 ms 00:33:54.901 [2024-11-26 21:01:49.792924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.901 [2024-11-26 21:01:49.830181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.901 [2024-11-26 21:01:49.830222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:54.901 [2024-11-26 21:01:49.830236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.206 ms 00:33:54.901 [2024-11-26 21:01:49.830263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.901 [2024-11-26 21:01:49.830307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.901 [2024-11-26 21:01:49.830318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:54.901 [2024-11-26 21:01:49.830329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:54.901 [2024-11-26 21:01:49.830338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.901 [2024-11-26 21:01:49.875991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.901 [2024-11-26 21:01:49.876193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:54.901 [2024-11-26 21:01:49.876215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.585 ms 00:33:54.901 [2024-11-26 21:01:49.876226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.901 [2024-11-26 21:01:49.876272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.901 [2024-11-26 21:01:49.876284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:54.901 [2024-11-26 21:01:49.876295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:54.901 [2024-11-26 21:01:49.876312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.901 [2024-11-26 21:01:49.876456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.901 [2024-11-26 21:01:49.876470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:54.901 [2024-11-26 21:01:49.876481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:33:54.901 [2024-11-26 21:01:49.876492] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:33:54.901 [2024-11-26 21:01:49.876534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.901 [2024-11-26 21:01:49.876546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:54.901 [2024-11-26 21:01:49.876557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:33:54.901 [2024-11-26 21:01:49.876574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.161 [2024-11-26 21:01:49.897132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.161 [2024-11-26 21:01:49.897171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:55.161 [2024-11-26 21:01:49.897185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.534 ms 00:33:55.161 [2024-11-26 21:01:49.897200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.161 [2024-11-26 21:01:49.897335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.161 [2024-11-26 21:01:49.897352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:55.161 [2024-11-26 21:01:49.897363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:55.161 [2024-11-26 21:01:49.897374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.161 [2024-11-26 21:01:49.931997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.161 [2024-11-26 21:01:49.932163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:55.161 [2024-11-26 21:01:49.932184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.596 ms 00:33:55.161 [2024-11-26 21:01:49.932196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.161 [2024-11-26 21:01:49.946710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.161 [2024-11-26 21:01:49.946856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:55.161 [2024-11-26 21:01:49.946893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.678 ms 00:33:55.161 [2024-11-26 21:01:49.946905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.161 [2024-11-26 21:01:50.032022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.161 [2024-11-26 21:01:50.032093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:55.161 [2024-11-26 21:01:50.032110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 85.046 ms 00:33:55.161 [2024-11-26 21:01:50.032121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.161 [2024-11-26 21:01:50.032341] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:55.161 [2024-11-26 21:01:50.032466] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:55.161 [2024-11-26 21:01:50.032588] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:55.161 [2024-11-26 21:01:50.032737] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:55.161 [2024-11-26 21:01:50.032752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.161 [2024-11-26 21:01:50.032762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:55.161 [2024-11-26 
21:01:50.032773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:33:55.161 [2024-11-26 21:01:50.032800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.161 [2024-11-26 21:01:50.032904] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:55.161 [2024-11-26 21:01:50.032919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.161 [2024-11-26 21:01:50.032934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:55.162 [2024-11-26 21:01:50.032944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:33:55.162 [2024-11-26 21:01:50.032954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.162 [2024-11-26 21:01:50.056112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.162 [2024-11-26 21:01:50.056163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:55.162 [2024-11-26 21:01:50.056177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.126 ms 00:33:55.162 [2024-11-26 21:01:50.056188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.162 [2024-11-26 21:01:50.070141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.162 [2024-11-26 21:01:50.070298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:55.162 [2024-11-26 21:01:50.070319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:55.162 [2024-11-26 21:01:50.070330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.162 [2024-11-26 21:01:50.070442] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:55.162 [2024-11-26 21:01:50.070669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.162 [2024-11-26 21:01:50.070681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:55.162 [2024-11-26 21:01:50.070694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.228 ms 00:33:55.162 [2024-11-26 21:01:50.070704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.730 [2024-11-26 21:01:50.676062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.730 [2024-11-26 21:01:50.676121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:55.730 [2024-11-26 21:01:50.676138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 604.116 ms 00:33:55.730 [2024-11-26 21:01:50.676149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.730 [2024-11-26 21:01:50.681845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.730 [2024-11-26 21:01:50.681886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:55.730 [2024-11-26 21:01:50.681900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.025 ms 00:33:55.730 [2024-11-26 21:01:50.681918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.730 [2024-11-26 21:01:50.682507] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:33:55.730 [2024-11-26 21:01:50.682543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.730 [2024-11-26 21:01:50.682555] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:55.730 [2024-11-26 21:01:50.682568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.591 ms 00:33:55.730 [2024-11-26 21:01:50.682578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.730 [2024-11-26 21:01:50.682629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.730 [2024-11-26 21:01:50.682643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:55.730 [2024-11-26 21:01:50.682654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:33:55.730 [2024-11-26 21:01:50.682669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:55.730 [2024-11-26 21:01:50.682708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 612.268 ms, result 0 00:33:55.730 [2024-11-26 21:01:50.682752] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:55.730 [2024-11-26 21:01:50.682856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:55.730 [2024-11-26 21:01:50.682871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:55.730 [2024-11-26 21:01:50.682882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.102 ms 00:33:55.730 [2024-11-26 21:01:50.682891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.259373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.259580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:56.303 [2024-11-26 21:01:51.259645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 575.256 ms 00:33:56.303 [2024-11-26 21:01:51.259660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.265528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.265567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:56.303 [2024-11-26 21:01:51.265580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.130 ms 00:33:56.303 [2024-11-26 21:01:51.265590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.266016] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:56.303 [2024-11-26 21:01:51.266038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.266049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:56.303 [2024-11-26 21:01:51.266061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.406 ms 00:33:56.303 [2024-11-26 21:01:51.266071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.266106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.266118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:56.303 [2024-11-26 21:01:51.266129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:56.303 [2024-11-26 21:01:51.266138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 
21:01:51.266176] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 583.419 ms, result 0 00:33:56.303 [2024-11-26 21:01:51.266221] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:56.303 [2024-11-26 21:01:51.266235] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:56.303 [2024-11-26 21:01:51.266248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.266259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:56.303 [2024-11-26 21:01:51.266270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1195.827 ms 00:33:56.303 [2024-11-26 21:01:51.266280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.266310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.266326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:56.303 [2024-11-26 21:01:51.266338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:56.303 [2024-11-26 21:01:51.266348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.277757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:56.303 [2024-11-26 21:01:51.278015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.278066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:56.303 [2024-11-26 21:01:51.278153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.649 ms 00:33:56.303 [2024-11-26 21:01:51.278189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.278895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.279027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:56.303 [2024-11-26 21:01:51.279111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.555 ms 00:33:56.303 [2024-11-26 21:01:51.279148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.281234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.281371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:56.303 [2024-11-26 21:01:51.281493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.037 ms 00:33:56.303 [2024-11-26 21:01:51.281532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.281602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.281757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:56.303 [2024-11-26 21:01:51.281805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:56.303 [2024-11-26 21:01:51.281836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.281968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.282004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:56.303 
[2024-11-26 21:01:51.282036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:56.303 [2024-11-26 21:01:51.282066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.282166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.282183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:56.303 [2024-11-26 21:01:51.282195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:56.303 [2024-11-26 21:01:51.282205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.282256] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:56.303 [2024-11-26 21:01:51.282269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.282279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:56.303 [2024-11-26 21:01:51.282290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:33:56.303 [2024-11-26 21:01:51.282300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.282353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:56.303 [2024-11-26 21:01:51.282365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:56.303 [2024-11-26 21:01:51.282376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:33:56.303 [2024-11-26 21:01:51.282386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:56.303 [2024-11-26 21:01:51.283409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1536.047 ms, result 0 00:33:56.562 [2024-11-26 21:01:51.298586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.562 [2024-11-26 21:01:51.314557] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:56.562 [2024-11-26 21:01:51.324110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:56.562 Validate MD5 checksum, iteration 1 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:56.562 21:01:51 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:56.562 21:01:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:56.562 [2024-11-26 21:01:51.494440] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization... 00:33:56.562 [2024-11-26 21:01:51.494873] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84645 ] 00:33:56.821 [2024-11-26 21:01:51.703303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.078 [2024-11-26 21:01:51.867077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.983  [2024-11-26T21:01:54.235Z] Copying: 640/1024 [MB] (640 MBps) [2024-11-26T21:01:56.765Z] Copying: 1024/1024 [MB] (average 641 MBps) 00:34:01.771 00:34:01.771 21:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:34:01.771 21:01:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:03.675 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:03.675 Validate MD5 checksum, iteration 2 00:34:03.675 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=47a9280c1101feb1378814e4255b60c4 00:34:03.675 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 47a9280c1101feb1378814e4255b60c4 != \4\7\a\9\2\8\0\c\1\1\0\1\f\e\b\1\3\7\8\8\1\4\e\4\2\5\5\b\6\0\c\4 ]] 00:34:03.675 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:03.676 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:03.676 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:34:03.676 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:03.676 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:03.676 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:03.676 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:03.676 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:03.676 21:01:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:03.676 [2024-11-26 21:01:58.584494] Starting SPDK v25.01-pre git sha1 
2f2acf4eb / DPDK 24.03.0 initialization... 00:34:03.676 [2024-11-26 21:01:58.584686] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84723 ] 00:34:03.934 [2024-11-26 21:01:58.787583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.195 [2024-11-26 21:01:58.963948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.099  [2024-11-26T21:02:01.352Z] Copying: 637/1024 [MB] (637 MBps) [2024-11-26T21:02:02.727Z] Copying: 1024/1024 [MB] (average 625 MBps) 00:34:07.733 00:34:07.992 21:02:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:07.992 21:02:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d5c2c19767858f729c9bbb48b4e375ae 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d5c2c19767858f729c9bbb48b4e375ae != \d\5\c\2\c\1\9\7\6\7\8\5\8\f\7\2\9\c\9\b\b\b\4\8\b\4\e\3\7\5\a\e ]] 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84608 ]] 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84608 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84608 ']' 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84608 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84608 00:34:09.897 killing process with pid 84608 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84608' 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84608 00:34:09.897 21:02:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84608 00:34:11.277 [2024-11-26 21:02:05.944887] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:34:11.277 [2024-11-26 21:02:05.964190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:05.964242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:34:11.277 [2024-11-26 21:02:05.964264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:11.277 [2024-11-26 21:02:05.964278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:05.964310] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:34:11.277 [2024-11-26 21:02:05.968347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:05.968387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:34:11.277 [2024-11-26 21:02:05.968411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.015 ms 00:34:11.277 [2024-11-26 21:02:05.968426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:05.968701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:05.968721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:34:11.277 [2024-11-26 21:02:05.968737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.220 ms 00:34:11.277 [2024-11-26 21:02:05.968752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:05.970029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:05.970070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:34:11.277 [2024-11-26 21:02:05.970088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.253 ms 00:34:11.277 [2024-11-26 21:02:05.970111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:05.971117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:05.971153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:34:11.277 [2024-11-26 21:02:05.971170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.958 ms 00:34:11.277 [2024-11-26 21:02:05.971184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:05.986491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:05.986700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:34:11.277 [2024-11-26 21:02:05.986735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.238 ms 00:34:11.277 [2024-11-26 21:02:05.986750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:05.994647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:05.994686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:34:11.277 [2024-11-26 21:02:05.994702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.849 ms 00:34:11.277 [2024-11-26 21:02:05.994714] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:05.994831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:05.994847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:34:11.277 [2024-11-26 21:02:05.994861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:34:11.277 [2024-11-26 21:02:05.994880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:06.009438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:06.009476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:34:11.277 [2024-11-26 21:02:06.009499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.536 ms 00:34:11.277 [2024-11-26 21:02:06.009510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:06.023684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:06.023728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:34:11.277 [2024-11-26 21:02:06.023742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.134 ms 00:34:11.277 [2024-11-26 21:02:06.023753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:06.037513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:06.037705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:34:11.277 [2024-11-26 21:02:06.037730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.717 ms 00:34:11.277 [2024-11-26 21:02:06.037743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:06.052275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.277 [2024-11-26 21:02:06.052314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:34:11.277 [2024-11-26 21:02:06.052328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.421 ms 00:34:11.277 [2024-11-26 21:02:06.052339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.277 [2024-11-26 21:02:06.052380] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:34:11.277 [2024-11-26 21:02:06.052400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:11.277 [2024-11-26 21:02:06.052415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:34:11.277 [2024-11-26 21:02:06.052429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:34:11.277 [2024-11-26 21:02:06.052442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 
[2024-11-26 21:02:06.052506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:11.277 [2024-11-26 21:02:06.052645] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:34:11.277 [2024-11-26 21:02:06.052657] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2ecaea59-977f-4218-9e67-d93680c5cb2f 00:34:11.277 [2024-11-26 21:02:06.052670] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:34:11.277 [2024-11-26 21:02:06.052682] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:34:11.277 [2024-11-26 21:02:06.052693] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:34:11.277 [2024-11-26 21:02:06.052706] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:34:11.277 [2024-11-26 21:02:06.052717] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:34:11.277 [2024-11-26 21:02:06.052745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:34:11.278 [2024-11-26 21:02:06.052764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:34:11.278 [2024-11-26 21:02:06.052775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:34:11.278 [2024-11-26 21:02:06.052786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:34:11.278 [2024-11-26 21:02:06.052799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.278 [2024-11-26 21:02:06.052813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:34:11.278 [2024-11-26 21:02:06.052826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.420 ms 00:34:11.278 [2024-11-26 21:02:06.052837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.278 [2024-11-26 21:02:06.073202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.278 [2024-11-26 21:02:06.073238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:34:11.278 [2024-11-26 21:02:06.073254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.326 ms 00:34:11.278 [2024-11-26 21:02:06.073267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:34:11.278 [2024-11-26 21:02:06.073869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:34:11.278 [2024-11-26 21:02:06.073892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:34:11.278 [2024-11-26 21:02:06.073906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.568 ms
00:34:11.278 [2024-11-26 21:02:06.073917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.278 [2024-11-26 21:02:06.140980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.278 [2024-11-26 21:02:06.141025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:34:11.278 [2024-11-26 21:02:06.141041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.278 [2024-11-26 21:02:06.141060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.278 [2024-11-26 21:02:06.141101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.278 [2024-11-26 21:02:06.141114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:34:11.278 [2024-11-26 21:02:06.141127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.278 [2024-11-26 21:02:06.141139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.278 [2024-11-26 21:02:06.141235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.278 [2024-11-26 21:02:06.141252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:34:11.278 [2024-11-26 21:02:06.141264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.278 [2024-11-26 21:02:06.141276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.278 [2024-11-26 21:02:06.141306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.278 [2024-11-26 21:02:06.141319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:34:11.278 [2024-11-26 21:02:06.141332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.278 [2024-11-26 21:02:06.141344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.278 [2024-11-26 21:02:06.267237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.278 [2024-11-26 21:02:06.267524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:34:11.278 [2024-11-26 21:02:06.267560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.278 [2024-11-26 21:02:06.267574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.538 [2024-11-26 21:02:06.370541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.538 [2024-11-26 21:02:06.370845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:34:11.538 [2024-11-26 21:02:06.370875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.538 [2024-11-26 21:02:06.370889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.538 [2024-11-26 21:02:06.371062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.538 [2024-11-26 21:02:06.371078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:34:11.538 [2024-11-26 21:02:06.371091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.538 [2024-11-26 21:02:06.371105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.538 [2024-11-26 21:02:06.371172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.538 [2024-11-26 21:02:06.371208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:34:11.538 [2024-11-26 21:02:06.371223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.538 [2024-11-26 21:02:06.371236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.538 [2024-11-26 21:02:06.371410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.538 [2024-11-26 21:02:06.371426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:34:11.538 [2024-11-26 21:02:06.371440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.538 [2024-11-26 21:02:06.371453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.538 [2024-11-26 21:02:06.371504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.538 [2024-11-26 21:02:06.371519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:34:11.538 [2024-11-26 21:02:06.371549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.538 [2024-11-26 21:02:06.371563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.538 [2024-11-26 21:02:06.371638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.538 [2024-11-26 21:02:06.371654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:34:11.538 [2024-11-26 21:02:06.371668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.538 [2024-11-26 21:02:06.371680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.538 [2024-11-26 21:02:06.371739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:11.538 [2024-11-26 21:02:06.371759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:34:11.538 [2024-11-26 21:02:06.371772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:11.538 [2024-11-26 21:02:06.371784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:11.538 [2024-11-26 21:02:06.371959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 407.723 ms, result 0
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:34:12.918 Remove shared memory files
21:02:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84354
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:34:12.918 ************************************
00:34:12.918 END TEST ftl_upgrade_shutdown
************************************
00:34:12.918
00:34:12.918 real 1m33.930s
00:34:12.918 user 2m6.248s
00:34:12.918 sys 0m26.552s
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:12.918 21:02:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:34:12.918 21:02:07 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:34:12.918 21:02:07 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:34:12.918 21:02:07 ftl -- ftl/ftl.sh@14 -- # killprocess 77428
00:34:12.918 21:02:07 ftl -- common/autotest_common.sh@954 -- # '[' -z 77428 ']'
00:34:12.918 Process with pid 77428 is not found
21:02:07 ftl -- common/autotest_common.sh@958 -- # kill -0 77428
00:34:12.918 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77428) - No such process
00:34:12.918 21:02:07 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77428 is not found'
00:34:12.918 21:02:07 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:34:12.918 21:02:07 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84852
00:34:12.918 21:02:07 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:34:12.919 21:02:07 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84852
00:34:12.919 21:02:07 ftl -- common/autotest_common.sh@835 -- # '[' -z 84852 ']'
00:34:12.919 21:02:07 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:12.919 21:02:07 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:12.919 21:02:07 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:12.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
21:02:07 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:12.919 21:02:07 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:12.919 [2024-11-26 21:02:07.848332] Starting SPDK v25.01-pre git sha1 2f2acf4eb / DPDK 24.03.0 initialization...
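The killprocess trace above follows a reusable shell pattern: kill -0 probes whether a pid still exists without delivering a signal, ps -o comm= guards against signalling a sudo wrapper, and wait reaps the child so its exit status is collected. A minimal standalone sketch of that pattern, reconstructed from the xtrace lines rather than copied from SPDK's autotest_common.sh, so details may differ:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # no pid supplied
        if ! kill -0 "$pid" 2>/dev/null; then     # kill -0 only tests existence
            echo "Process with pid $pid is not found"
            return 0                              # already gone, as with pid 77428 above
        fi
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1    # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true           # reap it if it is our child
    }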
00:34:12.919 [2024-11-26 21:02:07.848884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84852 ]
00:34:13.178 [2024-11-26 21:02:08.042586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:13.178 [2024-11-26 21:02:08.154412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:14.116 21:02:09 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:14.116 21:02:09 ftl -- common/autotest_common.sh@868 -- # return 0
00:34:14.116 21:02:09 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:34:14.379 nvme0n1
00:34:14.379 21:02:09 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:34:14.379 21:02:09 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:34:14.379 21:02:09 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:34:14.637 21:02:09 ftl -- ftl/common.sh@28 -- # stores=97b3433f-a4d5-48ac-87ff-71d25dc92b79
00:34:14.637 21:02:09 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:34:14.637 21:02:09 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97b3433f-a4d5-48ac-87ff-71d25dc92b79
00:34:14.895 21:02:09 ftl -- ftl/ftl.sh@23 -- # killprocess 84852
00:34:14.895 21:02:09 ftl -- common/autotest_common.sh@954 -- # '[' -z 84852 ']'
00:34:14.895 21:02:09 ftl -- common/autotest_common.sh@958 -- # kill -0 84852
00:34:14.895 21:02:09 ftl -- common/autotest_common.sh@959 -- # uname
00:34:14.895 21:02:09 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:14.895 21:02:09 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84852
00:34:14.895 killing process with pid 84852
21:02:09 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:14.895 21:02:09 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:14.895 21:02:09 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84852'
00:34:14.895 21:02:09 ftl -- common/autotest_common.sh@973 -- # kill 84852
00:34:14.895 21:02:09 ftl -- common/autotest_common.sh@978 -- # wait 84852
00:34:17.427 21:02:12 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:34:17.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:17.686 Waiting for block devices as requested
00:34:17.686 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:34:17.945 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:34:17.945 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:34:17.945 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:34:23.220 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:34:23.220 21:02:18 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:34:23.220 Remove shared memory files
21:02:18 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:23.220 21:02:18 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:34:23.220 21:02:18 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:34:23.220 21:02:18 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:34:23.220 21:02:18 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:23.220 21:02:18 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:34:23.220 ************************************
00:34:23.220 END TEST ftl
************************************
00:34:23.220
00:34:23.220 real 10m57.635s
00:34:23.220 user 13m35.246s
00:34:23.220 sys 1m34.060s
00:34:23.220 21:02:18 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:23.220 21:02:18 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:23.220 21:02:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:23.220 21:02:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:23.220 21:02:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:23.220 21:02:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:23.220 21:02:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:23.220 21:02:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:23.220 21:02:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:23.220 21:02:18 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:23.220 21:02:18 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:23.220 21:02:18 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:23.220 21:02:18 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:23.220 21:02:18 -- common/autotest_common.sh@10 -- # set +x
00:34:23.220 21:02:18 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:23.220 21:02:18 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:23.220 21:02:18 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:23.220 21:02:18 -- common/autotest_common.sh@10 -- # set +x
00:34:25.757 INFO: APP EXITING
00:34:25.757 INFO: killing all VMs
00:34:25.757 INFO: killing vhost app
00:34:25.757 INFO: EXIT DONE
00:34:26.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:26.585 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:26.585 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:26.585 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:26.585 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:27.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:27.722 Cleaning
00:34:27.722 Removing: /var/run/dpdk/spdk0/config
00:34:27.722 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:27.722 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:27.722 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:27.722 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:27.722 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:27.722 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:27.722 Removing: /var/run/dpdk/spdk0
00:34:27.722 Removing: /var/run/dpdk/spdk_pid57759
00:34:27.722 Removing: /var/run/dpdk/spdk_pid58022
00:34:27.722 Removing: /var/run/dpdk/spdk_pid58267
00:34:27.722 Removing: /var/run/dpdk/spdk_pid58377
00:34:27.723 Removing: /var/run/dpdk/spdk_pid58444
00:34:27.723 Removing: /var/run/dpdk/spdk_pid58583
00:34:27.723 Removing: /var/run/dpdk/spdk_pid58601
00:34:27.723 Removing: /var/run/dpdk/spdk_pid58822
00:34:27.723 Removing: /var/run/dpdk/spdk_pid58943
00:34:27.723 Removing: /var/run/dpdk/spdk_pid59061
00:34:27.723 Removing: /var/run/dpdk/spdk_pid59191
00:34:27.723 Removing: /var/run/dpdk/spdk_pid59304
00:34:27.723 Removing: /var/run/dpdk/spdk_pid59344
00:34:27.723 Removing: /var/run/dpdk/spdk_pid59380
00:34:27.723 Removing: /var/run/dpdk/spdk_pid59456
00:34:27.723 Removing: /var/run/dpdk/spdk_pid59581
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60077
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60169
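The clear_lvols sequence traced above (ftl/common.sh@28-30) enumerates every logical-volume store over the RPC socket and deletes each one before the target is killed, so stale lvstores cannot leak into the next test. A hedged sketch of the same loop, assuming rpc.py is on PATH and the target listens on the default /var/tmp/spdk.sock:

    # Delete every lvstore known to the running SPDK target.
    clear_lvols() {
        local stores lvs
        # bdev_lvol_get_lvstores returns a JSON array; pull out the uuid fields
        stores=$(rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        for lvs in $stores; do
            rpc.py bdev_lvol_delete_lvstore -u "$lvs"
        done
    }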
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60254
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60275
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60451
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60473
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60643
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60670
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60745
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60774
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60855
00:34:27.723 Removing: /var/run/dpdk/spdk_pid60878
00:34:27.723 Removing: /var/run/dpdk/spdk_pid61091
00:34:27.723 Removing: /var/run/dpdk/spdk_pid61133
00:34:27.723 Removing: /var/run/dpdk/spdk_pid61222
00:34:27.723 Removing: /var/run/dpdk/spdk_pid61428
00:34:27.723 Removing: /var/run/dpdk/spdk_pid61534
00:34:27.723 Removing: /var/run/dpdk/spdk_pid61587
00:34:27.723 Removing: /var/run/dpdk/spdk_pid62090
00:34:27.723 Removing: /var/run/dpdk/spdk_pid62194
00:34:27.723 Removing: /var/run/dpdk/spdk_pid62325
00:34:27.723 Removing: /var/run/dpdk/spdk_pid62378
00:34:27.723 Removing: /var/run/dpdk/spdk_pid62409
00:34:27.723 Removing: /var/run/dpdk/spdk_pid62493
00:34:27.723 Removing: /var/run/dpdk/spdk_pid63151
00:34:27.723 Removing: /var/run/dpdk/spdk_pid63198
00:34:27.723 Removing: /var/run/dpdk/spdk_pid63724
00:34:27.723 Removing: /var/run/dpdk/spdk_pid63833
00:34:27.723 Removing: /var/run/dpdk/spdk_pid63959
00:34:27.723 Removing: /var/run/dpdk/spdk_pid64012
00:34:27.723 Removing: /var/run/dpdk/spdk_pid64043
00:34:27.723 Removing: /var/run/dpdk/spdk_pid64073
00:34:27.723 Removing: /var/run/dpdk/spdk_pid65984
00:34:27.723 Removing: /var/run/dpdk/spdk_pid66140
00:34:27.723 Removing: /var/run/dpdk/spdk_pid66148
00:34:27.723 Removing: /var/run/dpdk/spdk_pid66167
00:34:27.723 Removing: /var/run/dpdk/spdk_pid66213
00:34:27.723 Removing: /var/run/dpdk/spdk_pid66217
00:34:27.723 Removing: /var/run/dpdk/spdk_pid66239
00:34:27.723 Removing: /var/run/dpdk/spdk_pid66279
00:34:27.982 Removing: /var/run/dpdk/spdk_pid66283
00:34:27.982 Removing: /var/run/dpdk/spdk_pid66301
00:34:27.982 Removing: /var/run/dpdk/spdk_pid66347
00:34:27.982 Removing: /var/run/dpdk/spdk_pid66351
00:34:27.982 Removing: /var/run/dpdk/spdk_pid66369
00:34:27.982 Removing: /var/run/dpdk/spdk_pid67783
00:34:27.982 Removing: /var/run/dpdk/spdk_pid67902
00:34:27.982 Removing: /var/run/dpdk/spdk_pid69326
00:34:27.982 Removing: /var/run/dpdk/spdk_pid71067
00:34:27.982 Removing: /var/run/dpdk/spdk_pid71152
00:34:27.982 Removing: /var/run/dpdk/spdk_pid71234
00:34:27.982 Removing: /var/run/dpdk/spdk_pid71348
00:34:27.982 Removing: /var/run/dpdk/spdk_pid71442
00:34:27.982 Removing: /var/run/dpdk/spdk_pid71543
00:34:27.982 Removing: /var/run/dpdk/spdk_pid71630
00:34:27.982 Removing: /var/run/dpdk/spdk_pid71709
00:34:27.982 Removing: /var/run/dpdk/spdk_pid71826
00:34:27.982 Removing: /var/run/dpdk/spdk_pid71923
00:34:27.982 Removing: /var/run/dpdk/spdk_pid72025
00:34:27.982 Removing: /var/run/dpdk/spdk_pid72111
00:34:27.982 Removing: /var/run/dpdk/spdk_pid72192
00:34:27.982 Removing: /var/run/dpdk/spdk_pid72306
00:34:27.982 Removing: /var/run/dpdk/spdk_pid72403
00:34:27.982 Removing: /var/run/dpdk/spdk_pid72504
00:34:27.982 Removing: /var/run/dpdk/spdk_pid72588
00:34:27.982 Removing: /var/run/dpdk/spdk_pid72669
00:34:27.982 Removing: /var/run/dpdk/spdk_pid72780
00:34:27.982 Removing: /var/run/dpdk/spdk_pid72877
00:34:27.983 Removing: /var/run/dpdk/spdk_pid72979
00:34:27.983 Removing: /var/run/dpdk/spdk_pid73064
00:34:27.983 Removing: /var/run/dpdk/spdk_pid73138
00:34:27.983 Removing: /var/run/dpdk/spdk_pid73218
00:34:27.983 Removing: /var/run/dpdk/spdk_pid73298
00:34:27.983 Removing: /var/run/dpdk/spdk_pid73407
00:34:27.983 Removing: /var/run/dpdk/spdk_pid73502
00:34:27.983 Removing: /var/run/dpdk/spdk_pid73604
00:34:27.983 Removing: /var/run/dpdk/spdk_pid73684
00:34:27.983 Removing: /var/run/dpdk/spdk_pid73765
00:34:27.983 Removing: /var/run/dpdk/spdk_pid73839
00:34:27.983 Removing: /var/run/dpdk/spdk_pid73919
00:34:27.983 Removing: /var/run/dpdk/spdk_pid74032
00:34:27.983 Removing: /var/run/dpdk/spdk_pid74130
00:34:27.983 Removing: /var/run/dpdk/spdk_pid74280
00:34:27.983 Removing: /var/run/dpdk/spdk_pid74582
00:34:27.983 Removing: /var/run/dpdk/spdk_pid74624
00:34:27.983 Removing: /var/run/dpdk/spdk_pid75094
00:34:27.983 Removing: /var/run/dpdk/spdk_pid75284
00:34:27.983 Removing: /var/run/dpdk/spdk_pid75385
00:34:27.983 Removing: /var/run/dpdk/spdk_pid75506
00:34:27.983 Removing: /var/run/dpdk/spdk_pid75565
00:34:27.983 Removing: /var/run/dpdk/spdk_pid75591
00:34:27.983 Removing: /var/run/dpdk/spdk_pid75887
00:34:27.983 Removing: /var/run/dpdk/spdk_pid75953
00:34:27.983 Removing: /var/run/dpdk/spdk_pid76046
00:34:27.983 Removing: /var/run/dpdk/spdk_pid76480
00:34:27.983 Removing: /var/run/dpdk/spdk_pid76622
00:34:27.983 Removing: /var/run/dpdk/spdk_pid77428
00:34:27.983 Removing: /var/run/dpdk/spdk_pid77580
00:34:27.983 Removing: /var/run/dpdk/spdk_pid77780
00:34:27.983 Removing: /var/run/dpdk/spdk_pid77885
00:34:27.983 Removing: /var/run/dpdk/spdk_pid78222
00:34:27.983 Removing: /var/run/dpdk/spdk_pid78488
00:34:27.983 Removing: /var/run/dpdk/spdk_pid78836
00:34:27.983 Removing: /var/run/dpdk/spdk_pid79047
00:34:27.983 Removing: /var/run/dpdk/spdk_pid79172
00:34:27.983 Removing: /var/run/dpdk/spdk_pid79247
00:34:27.983 Removing: /var/run/dpdk/spdk_pid79374
00:34:28.242 Removing: /var/run/dpdk/spdk_pid79409
00:34:28.242 Removing: /var/run/dpdk/spdk_pid79474
00:34:28.242 Removing: /var/run/dpdk/spdk_pid79673
00:34:28.242 Removing: /var/run/dpdk/spdk_pid79904
00:34:28.242 Removing: /var/run/dpdk/spdk_pid80306
00:34:28.242 Removing: /var/run/dpdk/spdk_pid80704
00:34:28.242 Removing: /var/run/dpdk/spdk_pid81101
00:34:28.242 Removing: /var/run/dpdk/spdk_pid81562
00:34:28.242 Removing: /var/run/dpdk/spdk_pid81706
00:34:28.242 Removing: /var/run/dpdk/spdk_pid81808
00:34:28.242 Removing: /var/run/dpdk/spdk_pid82449
00:34:28.242 Removing: /var/run/dpdk/spdk_pid82519
00:34:28.242 Removing: /var/run/dpdk/spdk_pid82939
00:34:28.242 Removing: /var/run/dpdk/spdk_pid83277
00:34:28.242 Removing: /var/run/dpdk/spdk_pid83753
00:34:28.242 Removing: /var/run/dpdk/spdk_pid83899
00:34:28.242 Removing: /var/run/dpdk/spdk_pid83957
00:34:28.242 Removing: /var/run/dpdk/spdk_pid84021
00:34:28.242 Removing: /var/run/dpdk/spdk_pid84085
00:34:28.242 Removing: /var/run/dpdk/spdk_pid84149
00:34:28.242 Removing: /var/run/dpdk/spdk_pid84354
00:34:28.242 Removing: /var/run/dpdk/spdk_pid84461
00:34:28.242 Removing: /var/run/dpdk/spdk_pid84531
00:34:28.242 Removing: /var/run/dpdk/spdk_pid84608
00:34:28.242 Removing: /var/run/dpdk/spdk_pid84645
00:34:28.242 Removing: /var/run/dpdk/spdk_pid84723
00:34:28.242 Removing: /var/run/dpdk/spdk_pid84852
00:34:28.242 Clean
00:34:28.242 21:02:23 -- common/autotest_common.sh@1453 -- # return 0
00:34:28.242 21:02:23 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:28.242 21:02:23 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:28.242 21:02:23 -- common/autotest_common.sh@10 -- # set +x
00:34:28.242 21:02:23 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:28.242 21:02:23 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:28.242 21:02:23 -- common/autotest_common.sh@10 -- # set +x
00:34:28.501 21:02:23 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:28.501 21:02:23 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:28.501 21:02:23 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:28.501 21:02:23 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:28.501 21:02:23 -- spdk/autotest.sh@398 -- # hostname
00:34:28.501 21:02:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:28.760 geninfo: WARNING: invalid characters removed from testname!
00:34:55.366 21:02:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:55.366 21:02:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:57.275 21:02:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:59.814 21:02:54 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:01.722 21:02:56 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:03.625 21:02:58 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:05.531 21:03:00 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
21:03:00 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:05.531 21:03:00 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:35:05.531 21:03:00 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:05.531 21:03:00 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:05.531 21:03:00 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:05.790 + [[ -n 5308 ]]
00:35:05.790 + sudo kill 5308
00:35:05.799 [Pipeline] }
00:35:05.814 [Pipeline] // timeout
00:35:05.819 [Pipeline] }
00:35:05.835 [Pipeline] // stage
00:35:05.840 [Pipeline] }
00:35:05.853 [Pipeline] // catchError
00:35:05.862 [Pipeline] stage
00:35:05.864 [Pipeline] { (Stop VM)
00:35:05.875 [Pipeline] sh
00:35:06.158 + vagrant halt
00:35:09.446 ==> default: Halting domain...
00:35:16.024 [Pipeline] sh
00:35:16.305 + vagrant destroy -f
00:35:18.840 ==> default: Removing domain...
00:35:19.419 [Pipeline] sh
00:35:19.699 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:19.708 [Pipeline] }
00:35:19.722 [Pipeline] // stage
00:35:19.726 [Pipeline] }
00:35:19.739 [Pipeline] // dir
00:35:19.744 [Pipeline] }
00:35:19.757 [Pipeline] // wrap
00:35:19.762 [Pipeline] }
00:35:19.774 [Pipeline] // catchError
00:35:19.782 [Pipeline] stage
00:35:19.784 [Pipeline] { (Epilogue)
00:35:19.796 [Pipeline] sh
00:35:20.079 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:25.360 [Pipeline] catchError
00:35:25.362 [Pipeline] {
00:35:25.376 [Pipeline] sh
00:35:25.662 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:25.920 Artifacts sizes are good
00:35:25.929 [Pipeline] }
00:35:25.942 [Pipeline] // catchError
00:35:25.954 [Pipeline] archiveArtifacts
00:35:25.961 Archiving artifacts
00:35:26.086 [Pipeline] cleanWs
00:35:26.100 [WS-CLEANUP] Deleting project workspace...
00:35:26.100 [WS-CLEANUP] Deferred wipeout is used...
00:35:26.129 [WS-CLEANUP] done
00:35:26.131 [Pipeline] }
00:35:26.145 [Pipeline] // stage
00:35:26.150 [Pipeline] }
00:35:26.163 [Pipeline] // node
00:35:26.167 [Pipeline] End of Pipeline
00:35:26.201 Finished: SUCCESS
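For reference, the coverage post-processing recorded in the autotest.sh@398-408 trace above condenses to the following sequence: capture a test tracefile, merge it with the pre-test baseline, then strip DPDK, system headers, and example/app code from the merged report. This is a sketch assembled from the flags visible in the log, with paths shortened and the repeated --rc options collapsed into one variable for readability:

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    OUT=/home/vagrant/spdk_repo/output    # i.e. spdk/../output from the log

    # merge the pre-test baseline with the post-test capture
    lcov $LCOV_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
    # remove third-party and uninteresting sources from the merged tracefile
    lcov $LCOV_OPTS -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info
    lcov $LCOV_OPTS -r $OUT/cov_total.info --ignore-errors unused,unused '/usr/*' -o $OUT/cov_total.info
    lcov $LCOV_OPTS -r $OUT/cov_total.info '*/examples/vmd/*' -o $OUT/cov_total.info
    lcov $LCOV_OPTS -r $OUT/cov_total.info '*/app/spdk_lspci/*' -o $OUT/cov_total.info
    lcov $LCOV_OPTS -r $OUT/cov_total.info '*/app/spdk_top/*' -o $OUT/cov_total.info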